101

Noncoherent Demodulation with Viterbi Decoding for Partial Response Continuous Phase Modulation

Ding, Xingwen; Zhong, Yumin; Chang, Hongyu; Chen, Ming
ITC/USA 2013 Conference Proceedings / The Forty-Ninth Annual International Telemetering Conference and Technical Exhibition / October 21-24, 2013 / Bally's Hotel & Convention Center, Las Vegas, NV / Owing to its constant envelope and continuous phase, Continuous Phase Modulation (CPM) offers higher spectral efficiency and power efficiency than many other modulation formats. We propose a noncoherent demodulation scheme with Viterbi decoding for partial-response CPM signals. Simulation results indicate that a properly chosen partial-response CPM scheme outperforms traditional PCM-FM, a typical full-response CPM, in demodulation performance, while also achieving higher spectral efficiency.
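The defining CPM property the abstract relies on, phase continuity, can be illustrated with a small sketch. The transmitted phase is phi(t) = 2*pi*h * sum_i a_i * q(t - i*T), where q is the integrated frequency pulse; in partial-response CPM the pulse spans L > 1 symbol intervals. The raised-cosine (LRC) pulse, modulation index h = 0.5, and L = 2 below are illustrative assumptions, not the paper's specific scheme:

```python
import math

def cpm_phase(symbols, h=0.5, L=2, samples_per_symbol=8):
    """Phase trajectory of a partial-response CPM signal with a
    raised-cosine (LRC) frequency pulse spanning L symbol periods."""
    def q(t):
        # integrated frequency pulse: 0 for t <= 0, 1/2 for t >= L
        if t <= 0:
            return 0.0
        if t >= L:
            return 0.5
        return (t - (L / (2 * math.pi)) * math.sin(2 * math.pi * t / L)) / (2 * L)

    sps = samples_per_symbol
    total = (len(symbols) + L) * sps  # run past the last symbol interval
    phase = []
    for n in range(total):
        t = n / sps  # time in symbol periods
        acc = sum(a * q(t - i) for i, a in enumerate(symbols))
        phase.append(2 * math.pi * h * acc)
    return phase
```

Because q is continuous, the phase never jumps between samples, which is what keeps the envelope constant after frequency modulation; a Viterbi demodulator tracks this phase through a finite trellis of phase states.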
102

Optimized Constellation Mappings for Adaptive Decode-and-Forward Relay Networks using BICM-ID

Kumar, Kuldeep
In this paper, we investigate an adaptive decode-and-forward (DF) cooperative diversity scheme based on bit-interleaved coded modulation with iterative decoding (BICM-ID). Data bits are first encoded with a convolutional code, and the coded bits, after an interleaver, are modulated before transmission; iterative decoding is used at the receiver. Optimized constellation mappings are designed jointly for the source and the relay using a genetic algorithm. A novel error-performance analysis for the adaptive DF scheme using BICM-ID is proposed. The simulation results agree well with the analytical results at high signal-to-noise ratio (SNR). The proposed mappings achieve a gain of more than 5.8 dB in SNR over existing mappings.
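The transmit chain described here, a convolutional code followed by an interleaver and then symbol mapping, can be sketched in miniature. The rate-1/2 encoder with the common (7,5) octal generators and the simple row/column block interleaver below are illustrative assumptions; the paper's actual code, interleaver, and genetically optimized mappings are not specified here:

```python
def conv_encode(bits, g1=0o7, g2=0o5, K=3):
    """Rate-1/2 convolutional encoder with generators (7,5) octal."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)  # shift in new bit
        out.append(bin(state & g1).count("1") & 1)   # parity of taps g1
        out.append(bin(state & g2).count("1") & 1)   # parity of taps g2
    return out

def block_interleave(bits, rows, cols):
    """Write row-wise, read column-wise: spreads adjacent coded bits."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]
```

In BICM-ID the interleaver is what decorrelates the bits within one modulation symbol, so the demapper and decoder can exchange extrinsic information across iterations.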
103

Prototyping of MP3 decoding and playback on an ARM-based FPGA development board

Williams, Joel Thomas, 1979- 22 November 2010
MP3, or MPEG-1 Layer 3, is the most widely used format for storing compressed audio. Compared with uncompressed audio (PCM), MP3 offers much smaller file sizes without a noticeable loss in audio quality. This report demonstrates decoding and playback of MP3 audio using a TLL5000 FPGA board.
104

Hardware architectures for Viterbi decoding in wireless networks

Κυρίτσης, Κωνσταντίνος 10 June 2014
In recent years, the volume of data handled by networked systems has grown continuously, together with the demand for reliable communication. Although advances in technology allow greater tolerance to interference in the communication channel, higher data rates distort the signal and make the system more susceptible to noise. Examples of such systems include wireless applications such as cellular systems, satellite communications, and WiFi wireless local area networks, as well as wired communications (wired modems). This thesis focuses on the 802.11 wireless LAN standards, and in particular on the recent 802.11ac, in order to set concrete performance criteria.
First, a decoder conforming to the standard is designed and implemented under timing and area constraints, and verified on FPGA and ASIC technologies. Second, different architectures for the algorithm (e.g. high-radix) are studied and the possible design trade-offs are examined, including methods for increasing throughput and performance issues related to error-correction capability.
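The Viterbi algorithm at the heart of such decoders is an add-compare-select (ACS) recursion over a trellis; hardware architectures parallelize exactly these ACS units, and high-radix variants process several trellis steps per clock. A minimal hard-decision software model is sketched below for the standard rate-1/2 (7,5) code, an illustrative choice rather than the 802.11ac-specific configuration:

```python
def viterbi_decode(received, g1=0o7, g2=0o5, K=3):
    """Hard-decision Viterbi decoder for a rate-1/2 convolutional code.
    `received` is a flat list of coded bit pairs; returns the info bits."""
    n_states = 1 << (K - 1)
    INF = float("inf")
    metric = [0.0] + [INF] * (n_states - 1)  # start in the zero state
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):  # hypothesize the next input bit
                full = (s << 1) | b            # K-bit register contents
                ns = full & (n_states - 1)     # next trellis state
                c = (bin(full & g1).count("1") & 1,
                     bin(full & g2).count("1") & 1)
                # add (branch Hamming metric), compare, select
                m = metric[s] + (c[0] != r[0]) + (c[1] != r[1])
                if m < new_metric[ns]:
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best]
```

A hardware decoder replaces the survivor-path lists with a traceback memory; the ACS loop over states is what gets unrolled into parallel butterflies on FPGA or ASIC.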
105

Design of a VLSI decoder for LDPC codes

Τσατσαράγκος, Ιωάννης 12 April 2010
LDPC (low-density parity-check) codes are widely applied for error correction in modern digital communication systems such as satellite Digital Video Broadcast (DVB-S2), IEEE 802.3an (10GBASE-T), and IEEE 802.16 (WiMAX). LDPC codes are linear block codes for error detection and correction, characterized by the sparse parity-check matrix from which they take their name. Decoding is performed by a message-passing algorithm that iteratively exchanges messages between two types of processing units, called check nodes and variable nodes. The hardware implementation of LDPC decoders is a rapidly developing field of research. This thesis presents the design, implementation, and optimization of VLSI decoder architectures for LDPC codes. Several iterative decoding algorithms exist; architectures based on two of them were studied, log Sum-Product (Log-SP) and Min-Sum. Log-SP is theoretically optimal, but Min-Sum is substantially simpler and of greater practical interest in a realistic application. Two decoding algorithms were developed that use these two algorithms for the check-node LLR update together with the layered-decoding schedule. The study focused on the WiMAX 802.16e LDPC codes, whose parity-check matrices, built from permuted identity matrices, are well suited to hardware realization.
The contribution of this work lies in the design and implementation of architectures that are efficient in area and decoding throughput (Mbps), as well as a detailed exploration of the design space, using the decoding algorithm, the operation scheduling, the degree of parallelism, the pipeline depth, and the data quantization as design parameters. In addition, parametric Matlab scripts were developed to generate the VHDL code automatically; their two key parameters are the number of processing units and the data word length. These scripts proved very useful during architecture development and optimization, allowing fast, automated generation of the VHDL for the decoder's individual units. A hardware model of the decoder also enables much faster simulations than corresponding software implementations (e.g. in a Matlab environment), providing a powerful tool for studying the performance of realistic decoder realizations. A development board based on a Virtex-4 FPGA was used during implementation.
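The Min-Sum check-node update that makes the decoder hardware-friendly replaces the Log-SP tanh computation with a sign product and a minimum of magnitudes over the *other* incoming messages; a scaling factor partially compensates the approximation. A minimal sketch (the 0.75 scaling factor is a typical choice in the literature, not a value taken from the thesis):

```python
def min_sum_check_update(llrs, scale=0.75):
    """Scaled Min-Sum check-node update.

    For each edge i, the outgoing message is the product of the signs
    and the minimum magnitude of all *other* incoming LLRs, scaled.
    """
    out = []
    for i in range(len(llrs)):
        others = llrs[:i] + llrs[i + 1:]
        sign = 1
        for v in others:
            if v < 0:
                sign = -sign
        out.append(scale * sign * min(abs(v) for v in others))
    return out
```

In hardware this reduces to comparators (tracking the two smallest magnitudes) and XORs of sign bits, which is why Min-Sum dominates practical LDPC decoder implementations.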
106

Compressive Measurement of Spread Spectrum Signals

Liu, Feng January 2015
Spread Spectrum (SS) techniques are methods used in communication systems where the spectrum of the signal is spread over a much wider bandwidth. The large bandwidth of the resulting signals makes SS signals difficult to intercept using conventional methods based on Nyquist sampling. Recently, a novel concept called compressive sensing (CS) has emerged. Compressive sensing theory states that a signal can be reconstructed from far fewer measurements than the Shannon-Nyquist theorem suggests, provided that the signal has a sparse representation in some dictionary. In this work, motivated by this concept, we study compressive approaches to detect and decode SS signals. We propose compressive detection and decoding systems based both on random measurements (the main focus of the CS literature) and on designed measurement kernels that exploit prior knowledge of the SS signal. Compressive sensing methods for both Frequency-Hopping Spread Spectrum (FHSS) and Direct Sequence Spread Spectrum (DSSS) systems are proposed.
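The core idea, recovering a sparse signal from m < n linear measurements y = Phi x, can be shown with a tiny hand-built example. Everything below is an illustrative toy under strong assumptions (a 1-sparse signal, a fixed 4x6 matrix of +/-1 entries, recovery by matched filtering against the columns); the proposed detection and decoding systems are far more elaborate:

```python
def measure(x, phi):
    """y = Phi x : each measurement is one linear projection of the signal."""
    return [sum(p * xi for p, xi in zip(row, x)) for row in phi]

def recover_one_sparse(y, phi):
    """Recover a 1-sparse x by matching y against each column of Phi."""
    n = len(phi[0])
    best, best_amp, best_score = 0, 0.0, -1.0
    for j in range(n):
        col = [row[j] for row in phi]
        energy = sum(c * c for c in col)
        amp = sum(c * yi for c, yi in zip(col, y)) / energy  # LS amplitude
        score = abs(amp) * energy ** 0.5                     # correlation
        if score > best_score:
            best, best_amp, best_score = j, amp, score
    x = [0.0] * n
    x[best] = best_amp
    return x
```

With 4 measurements of a length-6 signal, the support and amplitude are still identified, because the distinct +/-1 columns are only weakly correlated with one another; this single-atom step is the first iteration of greedy recovery algorithms such as matching pursuit.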
107

Multiple symbol decoding of differential space-time codes

Singhal, Rohit 30 September 2004
Multiple-symbol detection of space-time differential codes (MS-STDC) decodes N consecutive space-time symbols using maximum likelihood (ML) sequence detection to gain in performance over the conventional differential detection scheme; however, its computational complexity is exponential in N. A fast algorithm for implementing the MS-STDC in block-fading channels with complexity O(N^4) is developed. Its performance in both block-fading and symbol-by-symbol fading channels is demonstrated through simulations. Set partitioning in hierarchical trees (SPIHT) coupled with a rate-compatible punctured convolutional code (RCPC) and cyclic redundancy check (CRC) is employed as a generalized multiple-description source coder with robustness to channel errors. We propose a serial concatenation of the above with a differential space-time code (STDC) and invoke an iterative joint source-channel decoding procedure for decoding differentially space-time coded multiple descriptions. Experiments show a gain of up to 5 dB in PSNR with four iterations for image transmission in the absence of channel state information (CSI) at the receiver. A serial concatenation of SPIHT + RCPC/CRC is also considered with space-time codes (STC) instead of STDC; experiments show a gain of up to 7 dB with four iterations in the absence of CSI.
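Differential space-time codes generalize to multiple antennas the scalar differential trick sketched below: information rides on the ratio of consecutive transmitted symbols, so an unknown (but slowly varying) channel phase cancels at the receiver and no CSI is needed. This single-antenna DQPSK toy is for intuition only, not the matrix-valued codes of the thesis:

```python
import cmath

def dpsk_encode(symbols):
    """Differential encoding: s[k] = s[k-1] * v[k], reference s[0] = 1."""
    out = [1 + 0j]
    for v in symbols:
        out.append(out[-1] * v)
    return out

def dpsk_decode(received):
    """Recover v[k] from the ratio of consecutive samples; an unknown
    common phase rotation divides out, so no channel estimate is used."""
    return [received[k] / received[k - 1] for k in range(1, len(received))]
```

Multiple-symbol detection improves on this pairwise ratio by jointly ML-detecting N consecutive symbols, trading the exponential search (tamed by the fast algorithm above) for a few dB of the coherent-detection performance.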
108

Comparison of CELP speech coder with a wavelet method

Nagaswamy, Sriram 01 January 2006
This thesis compares the speech quality of the Code Excited Linear Prediction (CELP, Federal Standard 1016) speech coder with a new wavelet method for compressing speech. The performance of both is compared through subjective listening tests. The test signals used are clean signals (i.e., with no background noise), speech signals with room noise, and speech signals with artificial noise added. Results indicate that for clean signals and signals with predominantly voiced components the CELP standard performs better than the wavelet method, but for signals with room noise the wavelet method performs much better than CELP. For signals with artificial noise, the results are mixed: CELP performs better at low noise levels, while the wavelet method performs better at higher noise levels.
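A wavelet coder compresses by transforming the signal and discarding small coefficients, keeping the perceptually dominant structure. The one-level Haar sketch below illustrates the transform-threshold-reconstruct cycle; the thesis's actual wavelet family, decomposition depth, and quantizer are not specified here, so these are assumptions:

```python
def haar_forward(x):
    """One-level Haar transform: pairwise averages, then differences."""
    avg = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    diff = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return avg + diff

def haar_inverse(coeffs):
    """Invert: each (average, difference) pair restores two samples."""
    half = len(coeffs) // 2
    out = []
    for a, d in zip(coeffs[:half], coeffs[half:]):
        out.extend([a + d, a - d])
    return out

def compress(x, threshold):
    """Zero out small detail coefficients -- the lossy compression step."""
    c = haar_forward(x)
    half = len(c) // 2
    return c[:half] + [d if abs(d) > threshold else 0.0 for d in c[half:]]
```

Zeroed coefficients cost almost nothing to store, which is where the rate saving comes from; the quality comparison in the thesis is then a question of which distortions (CELP's model error vs. the wavelet's thresholding error) listeners find less objectionable.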
109

Soft MIMO Detection on Graphics Processing Units and Performance Study of Iterative MIMO Decoding

Arya, Richeek August 2011
In this thesis we present an implementation of a soft Multiple-Input Multiple-Output (MIMO) detection single-tree-search algorithm on Graphics Processing Units (GPUs), compare its performance across different GPUs and a Central Processing Unit (CPU), and study the performance of iterative decoding algorithms. We show that increasing the number of outer iterations further improves error-rate performance. GPUs are specialized, massively parallel devices designed to accelerate graphics processing; they can run thousands of threads simultaneously. Because of their tremendous processing power, there is increasing interest in using them for scientific and general-purpose computations, and companies such as Nvidia and Advanced Micro Devices (AMD) now support General Purpose GPU (GPGPU) applications. Nvidia developed the Compute Unified Device Architecture (CUDA) to program its GPUs, and efforts have been made to create a standard language for parallel computing usable across platforms; OpenCL is the first such language supported by all major GPU and CPU vendors. MIMO detection has high computational complexity. We implemented a soft MIMO detector on GPUs and studied its throughput and latency. A GPU can deliver a throughput of up to 4 Mbps for a soft detection algorithm, which is more than sufficient for most general-purpose tasks such as voice communication; compared to a CPU, a throughput increase of about 7x is achieved. We also compared two GPUs, one with low and one with high computational power; these comparisons show the effect of thread serialization, with the lower-end GPU's execution-time curve exhibiting a slope of 1/2. To further improve error-rate performance, iterative decoding techniques are employed, in which a feedback path connects the detector and the decoder.
With an eye towards GPU implementation we explored these algorithms. Better error-rate performance, however, comes at the price of higher power dissipation and greater latency. Simulations show that, based on the Signal-to-Noise Ratio (SNR), one can predict how many iterations are needed to reach acceptable Bit Error Rate (BER) and Frame Error Rate (FER) performance. Iterative decoding achieves an SNR gain of about 1.5 dB as the number of outer iterations is increased from zero. To reduce complexity, one can adjust the number of candidates the algorithm generates: while a candidate list of 128 is not sufficient for acceptable error-rate performance in a 4x4 MIMO system with 16-QAM modulation, list sizes of 512 and 1024 give comparable performance.
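Soft output from a list-based detector is typically formed with the max-log approximation: for each bit, take the difference between the best (minimum) distances among list candidates having that bit equal to 1 and equal to 0. The sketch below shows the general technique over an abstract candidate list, with bit labels packed into integers; it is not the thesis's exact single-tree-search implementation:

```python
import math

def maxlog_llr(candidates, dists, bit_index):
    """Max-log LLR of one bit from a candidate list.

    candidates : bit labels packed into integers
    dists      : squared Euclidean distance of each candidate
    Returns d1 - d0, so positive values favor bit = 0. An empty
    hypothesis set on one side yields +/-inf, which real detectors
    clip to a maximum LLR magnitude.
    """
    d0 = min((d for c, d in zip(candidates, dists)
              if not (c >> bit_index) & 1), default=math.inf)
    d1 = min((d for c, d in zip(candidates, dists)
              if (c >> bit_index) & 1), default=math.inf)
    return d1 - d0
```

The candidate-list size discussed above (128 vs. 512 vs. 1024) controls how reliable these per-bit minima are: too small a list can miss the best counter-hypothesis for a bit entirely, degrading the LLR quality fed back to the decoder.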
110

Phonological representations, phonological awareness, and print decoding ability in children with moderate to severe speech impairment

Sutherland, Dean Edward January 2006 (has links)
The development of reading competency is one of the most significant pedagogical achievements during the first few years of schooling. Although most children learn to read successfully when exposed to reading instruction, up to 18% of children experience significant reading difficulty (Shaywitz, 1998). As a group, young children with speech impairment are at risk of reading impairment, with approximately 50% of these children demonstrating poor acquisition of early reading skills (Larivee & Catts, 1999; Nathan, Stackhouse, Goulandris, & Snowling, 2004). A number of variables contribute to reading outcomes for children with speech impairment, including co-occurring language impairment, the nature and severity of the speech impairment, and social and cultural influences. An area of research that has received increasing attention is understanding how access to the underlying sound structure, or phonological representations, of spoken words stored in long-term memory accounts for reading difficulties observed in children (Elbro, 1996; Fowler, 1991). Researchers have hypothesised that children with speech impairment may be at increased risk of reading disability due to deficits at the level of phonological representations (Bird, Bishop, & Freeman, 1995). Phonological representation deficits can manifest as poor performance on tasks that require children to think about the sound structure of words. Knowledge about the phonological components of words is commonly referred to as phonological awareness; identifying and manipulating phonemes within words are examples of phonological awareness skills. Some children with speech impairment perform poorly on phonological awareness measures compared to children without speech difficulties (Bird et al., 1995; Carroll & Snowling, 2004; Rvachew, Ohberg, Grawburg, & Heyding, 2003).
As performance on phonological awareness tasks is a strong predictor of early reading ability (Hogan, Catts, & Little, 2005), there is an important need to determine whether children with speech impairment who demonstrate poor phonological awareness have deficits at the level of phonological representations. This thesis reports a series of studies that investigated the relationship between phonological representations, phonological awareness, and word decoding ability in children with moderate to severe speech impairment. A child with complex communication needs (CCN) who used Augmentative and Alternative Communication (AAC) was also examined to determine how the absence of effective articulation skills influences the development of phonological representations. The study employed a longitudinal design to compare the performance of nine children (aged 3:09-5:03 at initial assessment) with moderate to severe speech impairment and 17 children with typical speech development on novel assessment measures designed to determine characteristics of children's phonological representations. The tasks required children to judge the accuracy of spoken multisyllable words and newly learned nonwords. The relationships between performance on these tasks and measures of speech, phonological awareness, and early print decoding were also examined. Four assessment trials were implemented at six-monthly intervals over an 18-month period. The first assessment trial was administered approximately 6 to 12 months before children commenced school; the fourth was administered after children had completed 6 to 12 months of formal education. The child with CCN completed three assessment trials over a period of 16 months. Data analyses revealed that the children with speech impairment had significantly greater difficulty (p<0.01) judging mispronounced multisyllable words compared to their peers with typical speech development.
As a group, children with speech impairment also demonstrated inferior performance on the judgment of mispronounced forms of newly learned nonwords (p<0.05). No group differences were observed on the judgment of correctly pronounced real and nonword stimuli. Significant group differences on speech production and phoneme segmentation tasks were identified at each assessment trial. Moderate to high correlations (i.e., r = 0.40 to 0.70) were also observed between performance on the phonological representation tasks and performance on phonological awareness and speech production measures at each trial across the study. Although no significant group differences were observed on the nonword decoding task, 4 of the 9 children with speech impairment could not decode any letters in nonwords (compared to only 1 child without speech impairment) at the final assessment trial, when the children were 6 years old. Two children with speech impairment showed superior nonword decoding ability at trials 3 and 4. The within-group variability observed on the nonword decoding task highlighted the heterogeneity of children with speech impairment. The performances of four children with speech impairment with differing types of speech error patterns were analysed to investigate the role of phonological representations in their speech and phonological awareness development. The child with delayed speech development and excellent phonological awareness at trial 1 demonstrated superior phonological awareness and word decoding skills at age 6 years, although his performance on phonological representation tasks was inconsistent across trials. In contrast, a child with delayed development and poor early phonological awareness demonstrated weak performance on phonological representation, phonological awareness, and decoding at each successive assessment trial.
The child with a high percentage of inconsistent speech error patterns generally demonstrated poor performance on phonological representation, phonological awareness, and decoding measures at each of the 4 assessment trials. The child with consistent and unusual speech error patterns showed increasingly stronger performance on the phonological representation tasks and average performance on phonological awareness, but limited word decoding ability at age 6. The 11-year-old girl with CCN, whose speech attempts were limited and unintelligible, demonstrated below-average performance on phonological representation tasks, suggesting that an absence of articulatory feedback may negatively influence the development of well-specified phonological representations. This thesis provides evidence for the use of receptive tasks to identify differences in the phonological representations of children with and without speech impairment. The findings also support the link between the representation of phonological information in long-term memory and children's speech production accuracy, phonological awareness, and print decoding ability. The variable performance of some children with speech impairment and of the child with cerebral palsy demonstrates the need to consider individual characteristics to develop an understanding of how children store and access speech sound information to assist their acquisition of early reading skills.