About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
241

Optimization of demodulation performance of the GPS and GALILEO navigation messages / Optimisation de la performance de démodulation des messages de navigation GPS et GALILEO

Garcia Peña, Axel Javier 08 October 2010 (has links)
The demodulation performance achieved by the existing GPS signals, L1 C/A, L2C or L5, is satisfactory in open environments, where the available C/N0 is quite high. However, in indoor and urban environments, the C/N0 level of the received signal is often very low and suffers rapid variations, which can severely degrade demodulation of the GNSS messages. Therefore, since the mass-market applications being designed nowadays are aimed at these environments, it is necessary to investigate alternative demodulation/decoding methods that improve the GNSS message demodulation performance there. Moreover, recently developed GNSS signals, such as GPS L1C and GALILEO E1, must also be considered. These signals aim to provide a satellite positioning service in any kind of environment, with special attention to indoor and urban environments. The demodulation performance of the new GNSS signals, as defined in the current public documents, is therefore also analysed. In addition, new GALILEO E1 message structures are proposed and analysed in order to optimize both the demodulation performance and the quantity of broadcast information. The main goal of this dissertation is thus to analyse and improve the demodulation performance of the current open GNSS signals, specifically in indoor and urban environments, and to propose new navigation message structures for GALILEO E1.
The dissertation is organised as follows. First, the subject of the thesis is introduced, its original contributions are highlighted, and the outline of the report is presented. Second, the current structure of the analysed GNSS signals is described, paying special attention to the navigation message structure, the implemented channel codes and their decoding techniques. In the third section, two types of transmission channel model are presented for two different types of environment: an AWGN channel is used to model signal transmission in open environments, while the Perez-Fontan mobile channel model is chosen to represent urban and indoor environments. In the fourth section, an attempt to predict, at the bit level, the broadcast satellite ephemeris of the GPS L1 C/A navigation message is presented. The prediction is attempted using the GPS L1 C/A almanac data, a long-term orbital prediction program provided by TAS-F, and several signal processing methods: spectral estimation, the PRONY method, and a neural network. In the fifth section, the GPS L2C and GPS L5 navigation message demodulation performance is improved by using their channel codes in a non-traditional way. Two methods are examined: the first shares information between the message's inner and outer channel codes in order to correct more received words; the second uses the ephemeris data probabilities to improve traditional Viterbi decoding. In the sixth section, the GPS L1C and GALILEO E1 Open Service demodulation performance is analysed in different environments. First, a brief study of the structure of both signals is presented to determine the received C/N0 in an AWGN channel. Second, their demodulation performance is analysed through simulations in different environments, with different receiver speeds and carrier phase estimation techniques.
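As a concrete illustration of the decoding stage that the fifth chapter manipulates, the sketch below implements textbook hard-decision Viterbi decoding of a toy rate-1/2 convolutional code in Python. The generators (7, 5 octal) and constraint length 3 are chosen for readability and are not the GPS code (which is rate-1/2 with constraint length 7); the thesis's second method would additionally bias the branch metrics with a priori ephemeris bit probabilities, which this sketch omits.

```python
# Hedged sketch: hard-decision Viterbi decoding of a toy rate-1/2
# convolutional code (constraint length 3, generators 7 and 5 octal).
# The GPS L2C/L5 messages actually use a constraint-length-7 code; the
# small trellis here just keeps the example readable.

G = [0b111, 0b101]  # generator polynomials

def branch(state, b):
    # 3-bit register = (newest input bit, previous two bits)
    reg = (b << 2) | state
    return reg >> 1, [bin(reg & g).count("1") & 1 for g in G]

def encode(bits):
    state, out = 0, []
    for b in bits:
        state, o = branch(state, b)
        out += o
    return out

def viterbi(received):
    INF = float("inf")
    pm = {s: (0 if s == 0 else INF) for s in range(4)}  # path metrics
    paths = {s: [] for s in range(4)}
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_pm = {s: INF for s in range(4)}
        new_paths = {}
        for s in range(4):
            if pm[s] == INF:
                continue
            for b in (0, 1):
                ns, o = branch(s, b)
                # branch metric = Hamming distance to the received pair
                metric = pm[s] + sum(x != y for x, y in zip(o, r))
                if metric < new_pm[ns]:
                    new_pm[ns] = metric
                    new_paths[ns] = paths[s] + [b]
        pm, paths = new_pm, new_paths
    best = min(pm, key=pm.get)
    return paths[best]
```

Flipping a single channel bit and decoding still recovers the message, since the code's free distance (5) tolerates it.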
242

A Modified Sum-Product Algorithm over Graphs with Short Cycles

Raveendran, Nithin January 2015 (has links) (PDF)
We investigate the limitations of the sum-product algorithm for binary low-density parity-check (LDPC) codes having isolated short cycles. The assumption that the messages passed are independent, reasonable in cycle-free graph configurations, fails most severely in graphical structures with short cycles. This work is a step towards understanding the effect of short cycles on the error floors of the sum-product algorithm. We propose a modified sum-product algorithm that accounts for the statistical dependency of the messages passed around a cycle of length 4. We also formulate the modified algorithm in the log domain, which eliminates the numerical instability and precision issues associated with the probability domain. Simulation results show a signal-to-noise ratio (SNR) improvement for the modified sum-product algorithm compared with the original algorithm. This suggests that modelling the dependency among messages improves the decisions and successfully mitigates the effects of length-4 cycles in the Tanner graph. The improvement is significant in the high-SNR region, suggesting a possible cause of the error floor effects on such graphs. Using density evolution techniques, we analysed the modified decoding algorithm. The threshold computed for the modified algorithm is higher than that of the standard sum-product algorithm, validating the observed simulation results. We also prove that the conditional entropy of a codeword given the estimate obtained using the modified algorithm is lower than with the original sum-product algorithm.
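For reference, the baseline being modified can be sketched as a minimal log-domain sum-product decoder in Python, run here on the (7,4) Hamming code, whose Tanner graph contains length-4 cycles. This is the standard algorithm only; the proposed correction for message dependency around 4-cycles is not reproduced.

```python
import math

# Parity-check matrix of the (7,4) Hamming code; its Tanner graph
# contains 4-cycles, so it exercises exactly the situation studied.
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def sum_product(llr, H, iters=20):
    m, n = len(H), len(llr)
    c2b = [[0.0] * n for _ in range(m)]       # check-to-bit messages
    hard = [1 if l < 0 else 0 for l in llr]
    for _ in range(iters):
        # bit-to-check: total LLR minus the message on the same edge
        b2c = [[0.0] * n for _ in range(m)]
        for j in range(n):
            total = llr[j] + sum(c2b[i][j] for i in range(m) if H[i][j])
            for i in range(m):
                if H[i][j]:
                    b2c[i][j] = total - c2b[i][j]
        # check-to-bit: tanh rule, clipped to keep atanh finite
        for i in range(m):
            cols = [j for j in range(n) if H[i][j]]
            for j in cols:
                prod = 1.0
                for k in cols:
                    if k != j:
                        prod *= math.tanh(b2c[i][k] / 2.0)
                prod = max(min(prod, 0.999999), -0.999999)
                c2b[i][j] = 2.0 * math.atanh(prod)
        # tentative hard decision; stop once all checks are satisfied
        post = [llr[j] + sum(c2b[i][j] for i in range(m) if H[i][j])
                for j in range(n)]
        hard = [1 if p < 0 else 0 for p in post]
        if all(sum(hard[j] for j in range(n) if H[i][j]) % 2 == 0
               for i in range(m)):
            break
    return hard
```

With the all-zeros codeword and one bit received unreliably (negative LLR), the decoder converges to the correct word despite the 4-cycles.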
243

Sparse graph codes on a multi-dimensional WCDMA platform

Vlok, Jacobus David 04 July 2007 (has links)
Digital technology has made complex signal processing possible in communication systems and greatly improved the performance and quality of most modern telecommunication systems. The telecommunication industry and specifically mobile wireless telephone and computer networks have shown phenomenal growth in both the number of subscribers and emerging services, resulting in rapid consumption of common resources of which the electromagnetic spectrum is the most important. Technological advances and research in digital communication are necessary to satisfy the growing demand, to fuel the demand and to exploit all the possibilities and business opportunities. Efficient management and distribution of resources facilitated by state-of-the-art algorithms are indispensable in modern communication networks. The challenge in communication system design is to construct a system that can accurately reproduce the transmitted source message at the receiver. The channel connecting the transmitter and receiver introduces detrimental effects and limits the reliability and speed of information transfer between the source and destination. Typical channel effects encountered in mobile wireless communication systems include path loss between the transmitter and receiver, noise caused by the environment and electronics in the system, and fading caused by multiple paths and movement in the communication channel. In multiple access systems, different users cause interference in each other’s signals and adversely affect the system performance. To ensure reliable communication, methods to overcome channel effects must be devised and implemented in the system. Techniques used to improve system performance and capacity include temporal, frequency, polarisation and spatial diversity. This dissertation is concerned mainly with temporal or time diversity. 
Channel coding is a temporal diversity scheme and aims to improve the system error performance by adding structured redundancy to the transmitted message. The receiver exploits the redundancy to infer with greater accuracy which message was transmitted, compared with uncoded systems. Sparse graph codes are channel codes represented as sparse probabilistic graphical models, which originated in artificial intelligence theory. These channel codes are described as factor graph structures with bit nodes, representing the transmitted codeword bits, and bit-constraint (check) nodes. Each constraint involves only a small number of code bits, resulting in a sparse factor graph with far fewer connections between bit and check nodes than the maximum number of possible connections. Sparse graph codes are iteratively decoded using message passing or belief propagation algorithms. Three classes of iteratively decodable channel codes are considered in this study, including low-density parity-check (LDPC), Turbo and repeat-accumulate (RA) codes. The modulation platform presented in this dissertation is a spectrally efficient wideband system employing orthogonal complex spreading sequences (CSSs) to spread information sequences over a wider frequency band in multiple modulation dimensions. Special features of these spreading sequences include their constant envelopes and power output, providing communication range or device battery life advantages. This study shows that multiple layer modulation (MLM) can be used to transmit parallel data streams with improved spectral efficiency compared with single-layer modulation, providing data throughput rates proportional to the number of modulation layers at a performance equivalent to single-layer modulation.
Alternatively, multiple modulation layers can be used to transmit coded information to achieve improved error performance at throughput rates equivalent to a single-layer system. / Dissertation (MEng (Electronic Engineering))--University of Pretoria, 2007. / Electrical, Electronic and Computer Engineering / unrestricted
244

Fountain codes and their typical application in wireless standards like EDGE

Grobler, Trienko Lups 26 January 2009 (has links)
One of the most important technologies used in modern communication systems is channel coding. Channel coding dates back to a paper published by Shannon in 1948 [1], entitled "A Mathematical Theory of Communication". The basic idea behind channel coding is to send redundant information (parity) together with a message to make the transmission more error resistant. There are different types of codes that can be used to generate the required parity, including block, convolutional and concatenated codes. A special subclass of these codes is sparse graph codes. The structure of sparse graph codes can be depicted via a graphical representation: the factor graph, which has sparse connections between its elements. Codes belonging to this subclass include Low-Density Parity-Check (LDPC), Repeat-Accumulate (RA), Turbo and fountain codes. These codes can be decoded using the belief propagation algorithm, an iterative algorithm in which probabilistic information is passed between the nodes of the graph. This dissertation focuses on noisy decoding of fountain codes using belief propagation decoding. Fountain codes were originally developed for erasure channels, but since any factor graph can be decoded using belief propagation, noisy decoding of fountain codes can easily be accomplished. Three fountain codes, namely Tornado, Luby Transform (LT) and Raptor codes, were investigated in this dissertation. The following results were obtained:
1. The Tornado graph structure is unsuitable for noisy decoding, since the code structure protects the first layer of parity instead of the original message bits (a Tornado graph consists of more than one layer).
2. The successful decoding of systematic LT codes was verified.
3. A systematic Raptor code was introduced and successfully decoded. The simulation results show that the Raptor graph structure can improve on its constituent codes (a Raptor code consists of more than one code).
Lastly, an LT code was used to replace the convolutional incremental redundancy scheme used by the 2G mobile standard Enhanced Data Rates for GSM Evolution (EDGE). The results show that a fountain incremental redundancy scheme outperforms a convolutional approach if the frame lengths are long enough. For the EDGE platform the results also showed that the fountain incremental redundancy scheme outperforms the convolutional approach once the second transmission is received. Although EDGE is an older technology, it remains a good platform for testing different incremental redundancy schemes, since it was one of the first platforms to use incremental redundancy. / Dissertation (MEng)--University of Pretoria, 2008. / Electrical, Electronic and Computer Engineering / MEng / unrestricted
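The peeling step at the heart of fountain-code decoding over an erasure channel can be sketched in a few lines of Python. The neighbour sets used below are hypothetical hand-picked examples; a real LT encoder draws each symbol's degree from the robust soliton distribution and picks neighbours pseudo-randomly from a shared seed.

```python
# Toy peeling (erasure) decoder for an LT-style fountain code.

def lt_encode(source, neighbor_sets):
    out = []
    for nbrs in neighbor_sets:
        v = 0
        for j in nbrs:
            v ^= source[j]          # encoded symbol = XOR of neighbours
        out.append((set(nbrs), v))
    return out

def lt_peel(encoded, k):
    # repeatedly find a symbol with exactly one unresolved neighbour,
    # recover that source symbol, and substitute it back everywhere
    decoded = {}
    progress = True
    while progress and len(decoded) < k:
        progress = False
        for nbrs, v in encoded:
            pending = nbrs - set(decoded)
            if len(pending) == 1:
                j = pending.pop()
                for d in nbrs - {j}:
                    v ^= decoded[d]
                decoded[j] = v
                progress = True
    return [decoded.get(j) for j in range(k)]
```

If no degree-1 symbol is ever exposed, peeling stalls and the undecoded positions come back as None, which is exactly the failure mode the soliton distribution is designed to avoid.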
245

Evaluation of a content download service based on FLUTE and LDPC for improving the Quality of Experience over multicast wireless networks

De Fez Lava, Ismael 17 April 2014 (has links)
This thesis studies file delivery over wireless networks, analysing different mechanisms that optimise transmission in terms of bandwidth and quality of experience. Specifically, the thesis focuses on file transmission over multicast channels. Such transmission is appropriate in certain environments and has multiple applications, some of which are presented in this work. The thesis analyses in depth FLUTE (File Delivery over Unidirectional Transport), a protocol for the reliable delivery of files over unidirectional channels, and presents several proposals to improve transmission with this protocol. One of the foundations of the protocol is the File Delivery Table (FDT), which is used to describe the transmitted content. This work analyses how transmission of the FDT affects the performance of the FLUTE protocol, and provides a methodology to optimise content delivery over FLUTE. On the other hand, in multicast file transmission it is essential to offer a reliable service. Among the different mechanisms used by FLUTE to provide reliability, this work mainly analyses AL-FEC (Application Layer - Forward Error Correction) codes, which add redundancy to the transmission in order to minimise the effect of channel losses. In this respect, the thesis evaluates LDPC Staircase and LDPC Triangle codes, comparing their performance under different transmission conditions. Furthermore, for the case where a return channel is available, one of the main contributions of this thesis is the proposal of adaptive LDPC codes for file download services. With these codes, the content server dynamically changes the amount of FEC protection provided as a function of the losses detected by the users.
The evaluation demonstrates the good performance of these codes in different environments. / De Fez Lava, I. (2014). Evaluation of a content download service based on FLUTE and LDPC for improving the Quality of Experience over multicast wireless networks [Doctoral thesis]. Editorial Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/37051 / TESIS / Premios Extraordinarios de tesis doctorales
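The adaptive AL-FEC idea admits a very small sketch: the server revises its repair-symbol overhead from the loss rates receivers report over the return channel. The control rule and all constants below are illustrative assumptions, not the algorithm evaluated in the thesis.

```python
# Hedged sketch of server-side adaptive FEC overhead control.

def update_overhead(reported_loss, margin=2.0, floor=0.05, cap=0.6):
    """Next FEC overhead = worst reported loss rate times a safety
    margin, clamped to [floor, cap]. Rule and constants are
    illustrative, not the thesis's actual control law."""
    target = max(reported_loss, default=0.0) * margin
    return min(max(target, floor), cap)
```

A real deployment would also smooth the reports over time and bound how fast the overhead may change between transmission blocks.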
246

Enhanced Distance Measuring Equipment Data Broadcast Design, Analysis, Implementation, and Flight-Test Validation

Naab-Levy, Adam O. January 2015 (has links)
No description available.
247

Implementation and optimization of LDPC decoding algorithms tailored for Nvidia GPUs in 5G / Implementering och optimering av LDPC avkodningsalgoritmer anpassat för Nvidia GPU:er i 5G

Salomonsson, Benjamin January 2022 (has links)
Low-Density Parity-Check (LDPC) codes are linear error-correcting codes used to establish reliable communication between units on a noisy transmission channel in mobile telecommunications. LDPC algorithms detect and recover altered or corrupted message bits using sparse parity-check matrices in order to decipher messages correctly. LDPC codes have been shown to be fitting coding schemes for the fifth generation (5G) New Radio (NR), according to the third generation partnership project (3GPP). TietoEvry, a telecom consultancy, has found that LDPC decoding algorithms can be optimized using a parallel computing platform called Compute Unified Device Architecture (CUDA), developed by NVIDIA. This platform exploits the capabilities of a graphics processing unit (GPU) rather than a central processing unit (CPU), enabling massively parallel computation. An optimized version of an LDPC decoding algorithm, the Min-Sum Algorithm (MSA), is implemented in CUDA and, for comparison in terms of execution time, in C++, to explore the capabilities that CUDA offers. The testing is done with a set of 12 sparse parity-check matrices and input-channel messages of different sizes. As a result, the CUDA implementation executes approximately 55% faster than a standard, unoptimized C++ implementation.
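The MSA's check-node update, the part that parallelises naturally across GPU threads (one check node per thread), can be sketched in Python as follows. The 0.75 normalisation factor is a common choice for normalised min-sum and is an assumption here, not taken from the thesis.

```python
# Core check-node update of the (normalized) Min-Sum Algorithm: the
# outgoing message on each edge is the product of the signs and the
# minimum magnitude of the messages on all *other* edges. Check nodes
# are independent, which is what makes a one-thread-per-node CUDA
# kernel a natural mapping.

def min_sum_check_update(msgs, scale=0.75):
    out = []
    for i in range(len(msgs)):
        others = msgs[:i] + msgs[i + 1:]
        sign = 1
        for m in others:
            if m < 0:
                sign = -sign
        out.append(scale * sign * min(abs(m) for m in others))
    return out
```

In practice one computes only the two smallest magnitudes and the overall sign once per node, rather than re-scanning the other edges for every output; the exhaustive form above is kept for clarity.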
248

Generalized belief propagation based TDMR detector and decoder

Matcha, Chaitanya Kumar, Bahrami, Mohsen, Roy, Shounak, Srinivasa, Shayan Garani, Vasic, Bane 07 1900 (has links)
Two-dimensional magnetic recording (TDMR) achieves high areal densities by reducing the size of a bit to be comparable to the size of the magnetic grains, resulting in two-dimensional (2D) inter-symbol interference (ISI) and very high media noise. It is therefore critical to handle the media noise along with the 2D ISI detection. In this paper, we tune the generalized belief propagation (GBP) algorithm to handle the media noise seen in TDMR. We also provide intuition into the nature of the hard decisions provided by the GBP algorithm. The performance of the GBP algorithm is evaluated over a Voronoi-based TDMR channel model, where the soft outputs from the GBP algorithm are used by a belief propagation (BP) algorithm to decode low-density parity-check (LDPC) codes.
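A toy linear model of the 2D ISI that the detector must handle can be written down directly. This stand-in uses a fixed response mask rather than the Voronoi grain model, and includes no media noise, so it only illustrates how neighbouring bits mix into each read-back sample.

```python
# Toy 2D ISI read-back: each output sample is the 2D correlation of
# the bipolar bit grid with a small response mask h. Not the Voronoi
# media model used in the paper.

def readback(grid, h):
    R, C = len(grid), len(grid[0])
    kr, kc = len(h), len(h[0])
    out = [[0.0] * C for _ in range(R)]
    for y in range(R):
        for x in range(C):
            acc = 0.0
            for dy in range(kr):
                for dx in range(kc):
                    yy, xx = y + dy - kr // 2, x + dx - kc // 2
                    if 0 <= yy < R and 0 <= xx < C:
                        acc += h[dy][dx] * grid[yy][xx]
            out[y][x] = acc
    return out
```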
249

Application of Information Theory and Learning to Network and Biological Tomography

Narasimha, Rajesh 08 November 2007 (has links)
Studying the internal characteristics of a network using measurements obtained from end hosts is known as network tomography. The foremost challenge in measurement-based approaches is the large size of a network, where only a subset of measurements can be obtained because of the inaccessibility of the entire network. As the network becomes larger, a question arises as to how rapidly the monitoring resources (number of measurements or number of samples) must grow to obtain a desired monitoring accuracy. Our work studies the scalability of the measurements with respect to the size of the network. We investigate the issues of scalability and performance evaluation in IP networks, specifically focusing on fault and congestion diagnosis. We formulate network monitoring as a machine learning problem using probabilistic graphical models that infer network states from path-based measurements. We consider the theoretical and practical management resources needed to reliably diagnose congested/faulty network elements and provide fundamental limits on the relationships between the number of probe packets, the size of the network, and the ability to accurately diagnose such network elements. We derive lower bounds on the average number of probes per edge using the variational inference technique proposed in the context of graphical models under noisy probe measurements, and then propose an entropy lower (EL) bound by drawing similarities between the coding problem over a binary symmetric channel and the diagnosis problem. Our investigation is supported by simulation results. For the congestion diagnosis case, we propose a solution based on decoding linear error control codes on a binary symmetric channel for various probing experiments. To identify the congested nodes, we construct a graphical model, and infer congestion using the belief propagation algorithm.
In the second part of the work, we focus on the development of methods to automatically analyze the information contained in electron tomograms, which is a major challenge since tomograms are extremely noisy. Advances in automated data acquisition in electron tomography have led to an explosion in the amount of data that can be obtained about the spatial architecture of a variety of biologically and medically relevant objects with sizes in the range of 10-1000 nm. A fundamental step in the statistical inference of large amounts of data is to segment relevant 3D features in cellular tomograms. Procedures for segmentation must work robustly and rapidly in spite of the low signal-to-noise ratios inherent in biological electron microscopy. This work evaluates various denoising techniques and then extracts relevant features of biological interest in tomograms of HIV-1 in infected human macrophages and Bdellovibrio bacterial tomograms recorded at room and cryogenic temperatures. Our approach represents an important step in automating the efficient extraction of useful information from large datasets in biological tomography and in speeding up the process of reducing gigabyte-sized tomograms to relevant byte-sized data. Next, we investigate automatic techniques for segmentation and quantitative analysis of mitochondria in MNT-1 cells imaged using an ion-abrasion scanning electron microscope, and tomograms of Liposomal Doxorubicin formulations (Doxil), an anticancer nanodrug, imaged at cryogenic temperatures. A machine learning approach is formulated that exploits texture features, and joint image block-wise classification and segmentation is performed by histogram matching using a nearest neighbor classifier and the chi-squared statistic as a distance measure.
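The analogy between diagnosis and coding over a binary symmetric channel suggests a generic counting bound, sketched below. This is the textbook source/channel argument under stated assumptions (independent faults, probes modelled as BSC uses), not the exact EL bound derived in the work.

```python
import math

# Identifying which of n links are independently faulty with
# probability p_fault takes about n*H2(p_fault) bits of information,
# while each noisy probe observation conveys at most the capacity of a
# binary symmetric channel, 1 - H2(p_noise).

def h2(p):
    """Binary entropy in bits."""
    if p in (0, 1):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def min_probes(n_links, p_fault, p_noise):
    return n_links * h2(p_fault) / (1 - h2(p_noise))
```

With noiseless probes and maximally uncertain faults, the bound degenerates to one probe bit per link; noise on the probes inflates it by the inverse channel capacity.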
250

Experimental Studies On A New Class Of Combinatorial LDPC Codes

Dang, Rajdeep Singh 05 1900 (has links)
We implement a package for the construction of a new class of Low-Density Parity-Check (LDPC) codes based on a new random high-girth graph construction technique, and study the performance of the codes so constructed on both the Additive White Gaussian Noise (AWGN) channel and the Binary Erasure Channel (BEC). Our codes are "near regular", meaning that the left degree of any node in the constructed Tanner graph varies by at most 1 from the average left degree, and similarly for the right degree. The simulations for rate-half codes indicate that the codes perform better than both the regular Progressive Edge Growth (PEG) codes, which are constructed using a similar random technique, and the MacKay random codes. For high rates the ARG (Almost Regular high Girth) codes perform better than the PEG codes at low to medium SNRs, but the PEG codes seem to do better at high SNRs. We have tried to track both near-codewords and small-weight codewords for these codes to examine the performance at high rates. For the binary erasure channel the performance of the ARG codes is better than that of the PEG codes. We have also proposed a modification of the sum-product decoding algorithm, in which a quantity called the "node credibility" is used to appropriately process messages to check nodes. This technique substantially reduces the error rates at signal-to-noise ratios of 2.5 dB and beyond for the codes experimented on. The average number of iterations to achieve this improved performance is practically the same as that for the traditional sum-product algorithm.
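The "near regular" property is easy to state as a check on the parity-check matrix itself; the sketch below verifies it, with toy matrices (assumed examples, far smaller than real codes) serving as the test cases.

```python
# Check the near-regularity property the ARG construction enforces:
# every bit (column/left) degree and every check (row/right) degree of
# the Tanner graph stays within 1 of its respective average.

def is_near_regular(H):
    m, n = len(H), len(H[0])
    col_deg = [sum(H[i][j] for i in range(m)) for j in range(n)]
    row_deg = [sum(row) for row in H]
    def within_one(degs):
        avg = sum(degs) / len(degs)
        return all(abs(d - avg) <= 1 for d in degs)
    return within_one(col_deg) and within_one(row_deg)
```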
