1

Unequal Error Protection on SLCCA Image Encoded Bit Stream

Li, Chien-Hao 30 June 2002 (has links)
In SLCCA, the locations and magnitudes of significant coefficients are specified by the so-called significance map and the magnitude stream, respectively. The significance map is highly susceptible to channel errors: when the data is corrupted, errors propagate through the decoded stream. This paper addresses this critical problem and proposes a novel approach. In the significance map, data of different importance is interlaced, so our approach reorganizes the significance map according to the characteristics of the encoded symbols. SLCCA uses four symbols for encoding: POS, NEG, ZERO, and LINK. POS and NEG represent the sign of a significant coefficient, ZERO represents an insignificant coefficient, and LINK marks the presence of a significance link. The LINK symbol is more important than POS, NEG, or ZERO, because an error in a LINK symbol leads to error propagation. The reorganized data is protected by different RS codes, with more parity symbols allocated to the more important data.
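The parity-allocation idea described above can be illustrated with a minimal sketch (assumed weights and class sizes, not the author's implementation): a fixed parity budget is split across symbol classes in proportion to their importance.

```python
def allocate_parity(class_sizes, weights, total_parity):
    """Split a parity budget across data classes in proportion to
    importance weight * class size; integer rounding leftovers go to
    the most important class."""
    scores = [w * s for w, s in zip(weights, class_sizes)]
    total = sum(scores)
    alloc = [int(total_parity * sc / total) for sc in scores]
    # Give the rounding remainder to the class with the highest weight.
    top = max(range(len(weights)), key=lambda i: weights[i])
    alloc[top] += total_parity - sum(alloc)
    return alloc

# Hypothetical example: 100 LINK symbols (weight 4) vs. 300 POS/NEG/ZERO
# symbols (weight 1), with 64 parity symbols to distribute.
print(allocate_parity([100, 300], [4.0, 1.0], 64))  # → [37, 27]
```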
2

JOINT SOURCE/CHANNEL CODING FOR TRANSMISSION OF MULTIPLE SOURCES

Wu, Zhenyu, Bilgin, Ali, Marcellin, Michael W. 10 1900 (has links)
ITC/USA 2005 Conference Proceedings / The Forty-First Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2005 / Riviera Hotel & Convention Center, Las Vegas, Nevada / A practical joint source/channel coding algorithm is proposed for the transmission of multiple images and videos to reduce the overall reconstructed source distortion at the receiver within a given total bit rate. It is demonstrated that joint coding of multiple sources with this objective achieves both improved distortion performance and reduced quality variation at the same time. Experimental results based on multiple images and video sequences support this conclusion.
3

OPTIMIZATION OF RATELESS CODED SYSTEMS FOR WIRELESS MULTIMEDIA MULTICAST

CAO, YU 13 June 2011 (has links)
Rateless codes, also known as fountain codes, are a class of erasure error-control codes that are particularly well suited for broadcast/multicast systems. Raptor codes, a particularly successful implementation of digital fountain codes, have been adopted as the application-layer forward error correction (FEC) codes in the third generation partnership program (3GPP) Multimedia Broadcast and Multicast Services (MBMS) standard. However, the application of rateless codes to wireless multimedia broadcast/multicast communications has yet to overcome two major challenges: first, wireless multimedia communication usually has stringent delay requirements; second, multimedia multicast must cope with user heterogeneity. To meet these challenges, we propose a rateless code design that takes the layered nature of the source traffic as well as the varying quality of transmission channels into account. A convex optimization framework for the application of unequal error protection (UEP) rateless codes to synchronous and asynchronous multimedia multicast to heterogeneous users is proposed. A second thread of the thesis addresses the noisy, bursty and time-varying nature of wireless communication channels, which challenges the erasure-channel assumption often used for the wired Internet. To meet this challenge, the optimal combination of application-layer rateless code and physical-layer FEC code rates in time-varying fading channels is investigated. The performance of rateless codes in hybrid error-erasure channels with memory is then studied, and a cross-layer decoding method is proposed to improve decoding performance and complexity. / Thesis (Ph.D, Electrical & Computer Engineering) -- Queen's University, 2011-06-12 16:26:36.136
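The fountain-code principle above can be sketched as a toy LT-style encoder (a uniform degree distribution is assumed here for brevity; practical Raptor codes use a Soliton-based distribution plus a precode): each encoded block is the XOR of a random subset of source blocks, and the encoder can emit as many blocks as the channel requires.

```python
import random

def lt_encode(source_blocks, n_encoded, seed=0):
    """Toy rateless encoder: each output is (index set, XOR of the
    selected source blocks). Degree is drawn uniformly from 1..k,
    a simplification of the Soliton distributions used in practice."""
    rng = random.Random(seed)
    k = len(source_blocks)
    out = []
    for _ in range(n_encoded):
        degree = rng.randint(1, k)
        idx = rng.sample(range(k), degree)
        block = 0
        for i in idx:
            block ^= source_blocks[i]  # XOR-combine selected blocks
        out.append((idx, block))
    return out

# Emit 20 encoded blocks from 3 source blocks (integers stand in for
# packet payloads); a receiver needs only slightly more than k of them.
encoded = lt_encode([5, 9, 12], 20, seed=1)
print(len(encoded))  # → 20
```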
4

Error resilience in JPEG2000

Natu, Ambarish Shrikrishna, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2003 (has links)
The rapid growth of wireless communication and widespread access to information has resulted in a strong demand for robust transmission of compressed images over wireless channels. The challenge of robust transmission is to protect the compressed image data against loss in such a way as to maximize the received image quality. This thesis addresses this problem and provides an investigation of a forward error correction (FEC) technique evaluated in the context of the emerging JPEG2000 standard. Little effort has been made in the JPEG2000 project regarding error resilience. The only standardized techniques are based on the insertion of marker codes in the code-stream, which may be used to restore high-level synchronization between the decoder and the code-stream. This helps to localize errors and prevent them from propagating through the entire code-stream. Once synchronization is achieved, additional tools aim to exploit as much of the remaining data as possible. Although these techniques help, they cannot recover lost data. FEC adds redundancy to the bit-stream in exchange for increased robustness to errors. We investigate unequal protection schemes for JPEG2000 by applying different levels of protection to different quality layers in the code-stream. More particularly, the results reported in this thesis provide guidance concerning the selection of JPEG2000 coding parameters and appropriate combinations of Reed-Solomon (RS) codes for typical wireless bit error rates. We find that unequal protection schemes, together with the use of resynchronization markers and some additional tools, can significantly improve image quality in deteriorating channel conditions. The proposed channel coding scheme is easily incorporated into the existing JPEG2000 code-stream structure, and experimental results clearly demonstrate the viability of our approach.
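Choosing an RS code per quality layer requires the residual failure probability of a candidate RS(n, k) code. The sketch below (assuming independent symbol errors, which real bursty wireless channels only approximate) computes it from the error-correcting radius t = (n - k) / 2.

```python
from math import comb

def rs_block_failure(n, k, p_sym):
    """Probability an RS(n, k) codeword is uncorrectable, assuming
    independent symbol errors with probability p_sym; the code corrects
    up to t = (n - k) // 2 symbol errors."""
    t = (n - k) // 2
    return sum(comb(n, i) * p_sym**i * (1 - p_sym)**(n - i)
               for i in range(t + 1, n + 1))

# Unequal protection: a stronger (lower-rate) code for the base layer
# fails far less often than a weaker code at the same symbol error rate.
print(rs_block_failure(255, 191, 0.01) < rs_block_failure(255, 239, 0.01))  # → True
```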
5

Optimum bit-by-bit power allocation for minimum distortion transmission

Karaer, Arzu 25 April 2007 (has links)
In this thesis, bit-by-bit power allocation to minimize the mean-squared error (MSE) distortion of a basic communication system is studied. The system consists of a quantizer, optionally a channel encoder, and a Binary Phase Shift Keying (BPSK) modulator. The quantizer uses natural binary mapping. First, the case without channel coding is considered, with hard-decision decoding at the receiver. Errors in the more significant information bits contribute more to the distortion than errors in the less significant bits. For the uncoded case, the optimum power profile for each bit is determined analytically and through computer-based optimization methods such as differential evolution. For low signal-to-noise ratio (SNR), the less significant bits are allocated negligible power compared to the more significant bits. For high SNRs, the optimum bit-by-bit power allocation gives a constant MSE gain in dB over uniform power allocation. Second, the coded case is considered. Linear block codes such as the (3,2), (4,3) and (5,4) single parity check codes and the (7,4) Hamming code are used, with soft-decision decoding at the receiver. Approximate expressions for the MSE are used to find a near-optimum power profile for the coded case; the optimization is again performed with differential evolution. For a simple code like the (7,4) Hamming code, simulations show that up to 3 dB of MSE gain can be obtained by changing the power allocation between the information and parity bits. A systematic method to find the power profile for linear block codes is also introduced, given knowledge of the input-output weight enumerating function of the code: the information bits share one power level and the parity bits another, and the two levels can differ.
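The observation that more significant bits contribute more to distortion can be illustrated with a small sketch (standard AWGN/BPSK formulas, not the thesis code): under natural binary mapping, a flip of bit j shifts the reconstruction by 2^j quantizer steps, so moving power toward the MSB lowers the approximate channel MSE at equal total power.

```python
from math import erfc, sqrt

def q_func(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2))

def channel_mse(powers, n0=1.0, delta=1.0):
    """Approximate channel-induced MSE for natural binary mapping with
    BPSK: bit j (LSB = index 0) flips with probability Q(sqrt(2 E_j/N0))
    and contributes (2**j * delta)**2 to the squared error."""
    return sum(q_func(sqrt(2 * e / n0)) * (2**j * delta)**2
               for j, e in enumerate(powers))

# Same total power (4.0): tilting energy toward the MSB beats uniform.
uep = channel_mse([0.5, 1.0, 2.5])   # illustrative unequal profile
eep = channel_mse([4/3, 4/3, 4/3])   # uniform allocation
print(uep < eep)  # → True
```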
6

Performance of Single Layer H.264 SVC Video Over Error Prone Networks

January 2011 (has links)
abstract: With the tremendous increase in the popularity of networked multimedia applications, video data is expected to account for a large portion of the traffic on the Internet and, more importantly, on next-generation wireless systems. To satisfy a broad range of customers' requirements, two major problems need to be solved. The first is the need for a scalable representation of the input video; the recently developed scalable extension of the state-of-the-art H.264/MPEG-4 AVC video coding standard, known as H.264/SVC (Scalable Video Coding), provides a solution. The second is that wireless transmission media typically introduce errors into the bit stream due to noise, congestion and fading on the channel; protection against these channel impairments can be realized with forward error correcting (FEC) codes. In this study, the performance of scalable video coding in the presence of bit errors is examined. The encoded video is channel coded using Reed-Solomon codes to provide acceptable performance in the presence of channel impairments. In a scalable bit stream, some parts are more important than others: in the unequal error protection scheme, parity bytes are assigned to video packets based on their importance, whereas in the equal error protection scheme they are assigned based on message length. A quantitative comparison of the two schemes, along with the case where no channel coding is employed, is performed. H.264 SVC single-layer video streams for long video sequences of different genres are considered, serving as a means of effective video characterization. The JSVM reference software, in its current version, does not support decoding of erroneous bit streams, so a framework to obtain an H.264 SVC compatible bit stream is developed in this study.
It is concluded that assigning parity bytes based on the distribution of data across the different frame types provides optimum performance. Applying error protection to the bit stream enhances the quality of the decoded video with minimal overhead added to the bit stream. / Dissertation/Thesis / M.S. Electrical Engineering 2011
7

Estudo do emaranhamento quantico com base na teoria da codificação clássica / Analysis of quantum entanglement based on classical coding theory

Gazzoni, Wanessa Carla 15 August 2008 (has links)
Advisors: Reginaldo Palazzo Junior, Carlile Lavor / Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Made available in DSpace on 2018-08-11T20:14:12Z (GMT). No. of bitstreams: 1 Gazzoni_WanessaCarla_D.pdf: 915784 bytes, checksum: d9b26e53c10c74a95fabe11a016027ce (MD5) Previous issue date: 2008 / Abstract: In this thesis we present some contributions to a better understanding of quantum entanglement and its applications. With the purpose of classifying arbitrary pure quantum states as separable or entangled, a separability criterion is presented. This criterion is based on a homological-geometric interpretation, which allowed us to formalize some conclusions on the entanglement quantification of arbitrary pure states with three qubits.
From this interpretation, it was also possible to associate the description of the kets' content of an arbitrary pure state with concepts from classical coding theory. Based on this association, we propose a much simplified form for determining the mathematical description of arbitrary pure states satisfying maximum global entanglement. Using concepts from coding theory, we analyze the states of maximum global entanglement with respect to their inherent error protection. In this context, we present a new class of states not previously reported in the open literature. / Doctorate / Telecommunications and Telematics / Doctor of Electrical Engineering
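As a hedged illustration of the separable-versus-entangled classification (a standard two-qubit textbook criterion, not the thesis's three-qubit homological-geometric criterion): a pure state a|00> + b|01> + c|10> + d|11> is a product state exactly when the determinant ad - bc vanishes.

```python
def is_separable_2qubit(a, b, c, d, tol=1e-12):
    """Two-qubit pure state a|00> + b|01> + c|10> + d|11> is separable
    (a product state) iff the coefficient-matrix determinant ad - bc
    is zero; otherwise it is entangled."""
    return abs(a * d - b * c) < tol

print(is_separable_2qubit(1, 0, 0, 0))                  # |00>, product state → True
print(is_separable_2qubit(2**-0.5, 0, 0, 2**-0.5))      # Bell state, entangled → False
```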
8

Implementation and Performance Analysis of the DVB-T Standard System

Yuksekkaya, Mehmet 01 November 2005 (has links) (PDF)
Terrestrial Digital Video Broadcasting (DVB-T) is a standard for wireless broadcast of MPEG-2 video. DVB-T is based on channel coding algorithms and uses Orthogonal Frequency Division Multiplexing (OFDM) as its modulation scheme. In this thesis, we have implemented the ETSI EN 300 744 standard for Digital Video Broadcasting in MATLAB. The system is composed of several blocks, including OFDM modulation, channel estimation, channel equalization, frame synchronization and error-protection coding. We have investigated the performance of the complete system under different wireless broadcast impairments. In this performance analysis, we have considered Rayleigh fading multi-path channels with Doppler shift and frame synchronization errors, and obtained the bit error rate (BER) and channel mean-square error performance versus different maximum Doppler shift values, channel equalization techniques and channel estimation algorithms. Furthermore, we have investigated different interpolation methods for the channel response. It is shown that minimum mean-square error (MMSE) equalization performs better in symbol estimation than the zero forcing (ZF) equalizer. Also, linear interpolation in time combined with low-pass interpolation in frequency can be used for time-frequency interpolation of the channel response in practical applications.
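The ZF-versus-MMSE comparison can be sketched per OFDM subcarrier with the standard one-tap equalizer formulas (an illustrative sketch, not the thesis's MATLAB code): ZF divides by the channel coefficient and amplifies noise on weak subcarriers, while MMSE regularizes the division by the noise variance.

```python
def zf_equalize(y, h):
    """Zero-forcing: invert the per-subcarrier channel coefficient."""
    return [yi / hi for yi, hi in zip(y, h)]

def mmse_equalize(y, h, noise_var):
    """One-tap MMSE: conj(h) / (|h|^2 + noise_var) shrinks the estimate
    on weak subcarriers instead of amplifying noise."""
    return [yi * hi.conjugate() / (abs(hi)**2 + noise_var)
            for yi, hi in zip(y, h)]

# Noiseless check: both equalizers recover the transmitted symbols.
h = [2 + 0j, 0.5 + 0.5j]          # illustrative channel coefficients
x = [1 + 1j, -1 + 0j]             # transmitted QAM symbols
y = [hi * xi for hi, xi in zip(h, x)]
print(zf_equalize(y, h))          # → [(1+1j), (-1+0j)]
```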
9

Error resilient video communications using high level M-QAM : modelling and simulation of a comparative analysis of a dual-priority M-QAM transmission system for H.264/AVC video applications over band-limited and error-prone channels

Abdurrhman, Ahmed B. M. January 2010 (has links)
An experimental investigation of an M level (M = 16, 64 and 256) Quadrature Amplitude Modulation (QAM) transmission system suitable for video transmission is presented. The communication system is based on layered video coding and unequal error protection to make the video bitstream robust to channel errors. An implementation is described in which H.264 video is protected unequally by partitioning the compressed data into two layers of different visual importance. The partition scheme is based on a separation of the group of pictures (GoP) in the intra-coded frame (I-frame) and predictive coded frame (P frame). This partition scheme is then applied to split the H.264-coded video bitstream and is suitable for Constant Bit Rate (CBR) transmission. Unequal error protection is based on uniform and non-uniform M-QAM constellations in conjunction with different scenarios of splitting the transmitted symbol for protection of the more important information of the video data; different constellation arrangements are proposed and evaluated to increase the capacity of the high priority layer. The performance of the transmission system is evaluated under Additive White Gaussian Noise (AWGN) and Rayleigh fading conditions. Simulation results showed that in noisy channels the decoded video can be improved by assigning a larger portion of the video data to the enhancement layer in conjunction with non-uniform constellation arrangements; in better channel conditions the quality of the received video can be improved by assigning more bits in the high priority channel and using uniform constellations. The aforementioned varying conditions can make the video transmission more successful over error-prone channels. Further techniques were developed to combat various channel impairments by considering channel coding methods suitable for layered video coding applications. 
It is shown that a combination of non-uniform M-QAM and forward error correction (FEC) will yield a better performance. Additionally, antenna diversity techniques are examined and introduced to the transmission system that can offer a significant improvement in the quality of service of mobile video communication systems in environments that can be modelled by a Rayleigh fading channel.
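The non-uniform constellation idea can be sketched with a parameterized 16-QAM grid (an assumed parameterization, not the thesis's exact arrangement): increasing alpha pushes the four quadrant clusters apart, strengthening the two high-priority quadrant bits at the expense of the low-priority bits within each quadrant.

```python
def nonuniform_16qam(alpha):
    """Non-uniform 16-QAM constellation: amplitude levels are
    {±alpha, ±(alpha + 2)} on each axis. alpha = 1 gives the uniform
    grid {±1, ±3}; alpha > 1 widens the gap between quadrants, which
    carry the high-priority bits."""
    levels = [-(alpha + 2), -alpha, alpha, alpha + 2]
    return [complex(i, q) for i in levels for q in levels]

# alpha = 1 reproduces uniform 16-QAM; alpha = 1.5 protects the
# quadrant (high-priority) bits more strongly.
print(len(nonuniform_16qam(1.0)))  # → 16
```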