11

Opportunistic Scheduling, Cooperative Relaying and Multicast in Wireless Networks

January 2011 (has links)
abstract: This dissertation builds a clear understanding of the role of information in wireless networks and devises adaptive strategies to optimize overall performance. The meaning of information ranges from channel/network states to the structure of the signal itself. Under the common thread of characterizing the role of information, this dissertation investigates opportunistic scheduling, relaying and multicast in wireless networks. To assess the role of channel state information, the problem of distributed opportunistic scheduling (DOS) with incomplete information is considered for ad-hoc networks in which many links contend for the same channel using random access. The objective is to maximize the system throughput. In practice, link state information is noisy and may result in throughput degradation. Refining the state information by additional probing can therefore improve the throughput, but at the cost of the extra probing time. Capitalizing on optimal stopping theory, the optimal scheduling policy is shown to be threshold-based and is characterized by either one or two thresholds, depending on network settings. To understand the benefits of side information in cooperative relaying scenarios, a basic model is explored for two-hop transmissions of two information flows which interfere with each other. While the first hop is a classical interference channel, the second hop can be treated as an interference channel with transmitter side information. Various cooperative relaying strategies are developed to enhance the achievable rate. In another context, a simple sensor network is considered, where a sensor node acts as a relay and aids the fusion center in detecting an event. Two relaying schemes are considered: analog relaying and digital relaying. Sufficient conditions are provided for the optimality of analog relaying over digital relaying in this network. To illustrate the role of information about the signal structure in joint source-channel coding, multicast of compressible signals over lossy channels is studied. The focus is on the network outage from the perspective of signal distortion across all receivers. Based on extreme value theory, the network outage is characterized in terms of key parameters. A new method using subblock network coding is devised, which prioritizes resource allocation based on the signal's information structure. / Dissertation/Thesis / Ph.D. Electrical Engineering 2011
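To make the threshold structure concrete, here is a minimal Monte Carlo sketch of a single-threshold DOS policy. All numbers are illustrative assumptions, not values from the dissertation: i.i.d. exponential link rates, a contention cost tau, and a transmission duration T. The classic rate-of-return iteration converges to the threshold lam*, which also equals the achieved throughput.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not from the dissertation): tau is the average
# cost of one contention round, T the data-transmission duration, and the
# achievable rate R of the contention winner is drawn i.i.d. per round.
tau, T = 1.0, 10.0
rates = rng.exponential(scale=1.0, size=200_000)  # Monte Carlo rate samples

# Pure-threshold policy: after winning contention, transmit iff R >= lam,
# otherwise skip and re-contend.  Renewal-reward gives the throughput
#   g(lam) = E[R*T ; R >= lam] / (tau + T * P(R >= lam)),
# and the optimal threshold is the fixed point lam* = g(lam*).
lam = 1.0
for _ in range(200):
    stop = rates >= lam
    p_stop = stop.mean()
    throughput = (rates[stop].mean() * T * p_stop) / (tau + T * p_stop)
    lam = throughput  # rate-of-return iteration: lam converges to lam*

print(f"threshold lam* ~ {lam:.3f} (in rate units; equals the throughput)")
```

This illustrates only the single-threshold case; per the abstract, some network settings lead to a two-threshold policy instead.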
12

A Source-Channel Separation Theorem with Application to the Source Broadcast Problem

Khezeli, Kia 11 1900 (has links)
A converse method is developed for the source broadcast problem. Specifically, it is shown that the separation architecture is optimal for a variant of the source broadcast problem, and the associated source-channel separation theorem can be leveraged, via a reduction argument, to establish a necessary condition for the original problem, which unifies several existing results in the literature. Somewhat surprisingly, this method, albeit based on the source-channel separation theorem, can be used to prove the optimality of non-separation-based schemes and determine the performance limits in certain scenarios where the separation architecture is suboptimal. / Thesis / Master of Applied Science (MASc)
13

Coding with side information

Cheng, Szeming 01 November 2005 (has links)
Source coding and channel coding are two important problems in communications. Although side information exists in everyday scenarios, its effect is not taken into account in conventional setups. In this thesis, we focus on the practical design of two interesting coding problems with side information: Wyner-Ziv coding (WZC; source coding with side information at the decoder) and Gel'fand-Pinsker coding (GPC; channel coding with side information at the encoder). For WZC, we split the design problem into two cases: when the distortion of the reconstructed source is zero and when it is not. We first review how the zero-distortion case, commonly called Slepian-Wolf coding (SWC), can be implemented using conventional channel coding. Then, we detail the SWC design using low-density parity-check (LDPC) codes. To facilitate SWC design, we justify a necessary requirement that the SWC performance should be independent of the input source. We show that a sufficient condition for this requirement is that the hypothetical channel between the source and the side information satisfies a symmetry condition dubbed dual symmetry. Furthermore, under this dual symmetry condition, the SWC design problem can simply be treated as LDPC code design over the hypothetical channel. When the distortion of the reconstructed source is non-zero, we propose a practical WZC paradigm called Slepian-Wolf coded quantization (SWCQ), which combines SWC and nested lattice quantization. We point out an interesting analogy between SWCQ and entropy-coded quantization in classic source coding. Furthermore, a practical SWCQ scheme using 1-D nested lattice quantization and LDPC codes is implemented. For GPC, since the actual design procedure depends on the precise setting of the problem, we investigate GPC in the form of a digital watermarking problem, as digital watermarking is the precise dual of WZC. We then introduce an enhanced version of the well-known spread-spectrum watermarking technique. Two applications related to digital watermarking are presented.
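The idea of implementing SWC with conventional channel coding — the encoder sends only the syndrome of the source block, and the decoder corrects the side information toward the source — can be illustrated with a toy example. The thesis uses LDPC codes; the sketch below substitutes a tiny (7,4) Hamming code and assumes the source and side information differ in at most one bit per block.

```python
import numpy as np

# Toy syndrome-based Slepian-Wolf sketch with a (7,4) Hamming code, assuming
# the source X and side information Y differ in at most one bit per block.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])  # column j is j in binary

def encode(x):
    # Encoder sends only the 3-bit syndrome of the 7-bit block: rate 3/7.
    return H @ x % 2

def decode(s, y):
    # Decoder locates the (assumed single) position where x and y differ:
    # H(x XOR y) = Hx XOR Hy, and for a Hamming code that syndrome is the
    # 1-indexed column number of the flipped bit (0 means no difference).
    diff = (H @ y + s) % 2
    pos = diff[0] + 2*diff[1] + 4*diff[2]
    x_hat = y.copy()
    if pos:
        x_hat[pos - 1] ^= 1
    return x_hat

rng = np.random.default_rng(1)
x = rng.integers(0, 2, 7)
y = x.copy(); y[4] ^= 1          # side information: x with one bit flipped
assert (decode(encode(x), y) == x).all()
print("recovered x from a 3-bit syndrome plus side information")
```

The same principle scales up: with LDPC codes the syndrome is decoded by belief propagation over the hypothetical source/side-information channel rather than by table lookup.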
14

Network compression via network memory: fundamental performance limits

Beirami, Ahmad 08 June 2015 (has links)
The amount of information churned out daily around the world is staggering, and future technological advancements are therefore contingent upon the development of scalable acquisition, inference, and communication mechanisms for this massive data. This Ph.D. dissertation draws upon mathematical tools from information theory and statistics to understand the fundamental performance limits of universal compression of this massive data at the packet level, applied just above layer 3 of the network, when the intermediate network nodes are capable of memorizing previous traffic. Universality of compression imposes an inevitable redundancy (overhead) on the compression performance of universal codes, due to the learning of the unknown source statistics. In this work, previous asymptotic results on the redundancy of universal compression are generalized to the finite-length regime (applicable to small network packets). Further, network compression via memory is proposed as a solution for compressing relatively small network packets whenever the network nodes (i.e., the encoder and the decoder) are equipped with memory and have access to massive amounts of previous communication. In a nutshell, network compression via memory learns the patterns and statistics of the packet payloads and uses them for compression and traffic reduction. At the cost of increased computational overhead in the network nodes, it significantly reduces the transmission cost in the network. This leads to a large performance improvement, as the cost of transmitting one bit is far greater than the cost of processing it.
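A back-of-the-envelope calculation shows why this redundancy matters precisely at packet lengths. The sketch below uses only the textbook leading-order term, roughly (d/2) log2(n) bits per length-n sequence for a memoryless source with d unknown parameters — an assumed approximation, not the dissertation's exact finite-length bounds — with an assumed 256-letter byte alphabet.

```python
import math

# Leading-order average redundancy of universal compression for a memoryless
# source with k-letter alphabet (d = k - 1 free parameters to "learn"):
#   R(n) ~ (d / 2) * log2(n)  bits per length-n sequence.
def redundancy_bits(n, k=256):
    d = k - 1
    return (d / 2) * math.log2(n)

for n in (512, 1500, 65536, 10**7):   # packet sizes, in bytes-as-symbols
    r = redundancy_bits(n)
    print(f"n = {n:>8} symbols: overhead ~ {r:7.0f} bits "
          f"= {r / (8 * n):6.2%} of the raw packet")
```

For a 1500-byte packet the overhead is on the order of ten percent of the raw size, while for megabyte-scale sequences it is negligible — which is exactly the gap that memorizing previous traffic is meant to close.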
15

JPEG 2000 and parity bit replenishment for remote video browsing

Devaux, François-Olivier 19 September 2008 (has links)
This thesis is devoted to the study of a compression and transmission framework for video. It exploits the JPEG 2000 standard and coding-with-side-information principles to enable efficient interactive browsing of video sequences. During the last decade, we have witnessed an explosion of digital visual information as well as a significant diversification of visualization devices. In terms of viewing experience, many applications now enable users to interact with content stored on a distant server. Pausing video sequences to observe details by zooming and panning or, conversely, browsing low-resolution versions of high-quality HD videos are becoming common tasks. The video distribution framework envisioned in this thesis targets such devices and applications. Based on the conditional replenishment framework, the proposed system combines two complementary coding methods. The first is JPEG 2000, a scalable and very efficient compression algorithm. The second is based on the coding-with-side-information paradigm. This technique is relatively novel in a video context and has been adapted to the particular scalable image representation adopted in this work. Interestingly, it has been improved by integrating an image source model and by exploiting the temporal correlation inherent to the sequence. A particularity of this work is its emphasis on system scalability as well as on server complexity. The proposed browsing architecture can scale to handle large volumes of content and serve a possibly very large number of heterogeneous users. This is achieved by defining a scheduler that adapts its decisions to the channel conditions and to user requirements expressed in terms of computational capabilities and spatio-temporal interest. This scheduling is carried out in real time at low computational cost and in a post-compression way, without re-encoding the sequences.
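At its core, conditional replenishment is a per-block decision between reusing the reference, sending fresh intra data, or sending parity bits that correct the reference. A minimal sketch of such a decision, assuming hypothetical per-block rate/distortion estimates for each mode and a made-up Lagrange multiplier (the thesis' scheduler additionally accounts for channel conditions and user interest):

```python
# Minimal conditional-replenishment mode decision.  Each mode maps to an
# assumed (rate in bits, distortion in MSE) pair for one codeblock.
LAMBDA = 0.05  # hypothetical rate-distortion trade-off parameter

def pick_mode(block_modes):
    # Choose the mode minimizing the Lagrangian cost D + LAMBDA * R.
    return min(block_modes,
               key=lambda m: block_modes[m][1] + LAMBDA * block_modes[m][0])

# A static block is cheapest to skip; a changed block may be cheaper to
# patch with parity bits than to replenish with fresh JPEG 2000 data.
static_block  = {"skip": (0, 4.0),  "jpeg2000": (3000, 2.0), "parity": (800, 2.5)}
changed_block = {"skip": (0, 90.0), "jpeg2000": (3000, 2.0), "parity": (900, 3.0)}
print(pick_mode(static_block), pick_mode(changed_block))  # skip, parity
```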
16

Peak-to-Average Power Reduction Schemes in SFBC MIMO-OFDM Systems without Side Information

Ciou, Ying-Chi 30 July 2012 (has links)
Selected mapping (SLM) is a well-known technique for reducing the peak-to-average power ratio (PAPR) in orthogonal frequency division multiplexing (OFDM) systems. Although the SLM scheme can reduce PAPR efficiently, side information (SI) must be transmitted to the receiver to indicate which candidate generated the OFDM signal with the lowest PAPR. Robust channel coding schemes are typically adopted to prevent erroneous decoding of the SI, lowering bandwidth efficiency. To reduce PAPR efficiently while avoiding the bandwidth efficiency loss caused by SI transmission, two novel PAPR reduction methods are proposed for SFBC MIMO-OFDM systems with two transmit antennas that employ Alamouti coding. The candidate signals are constructed in the frequency domain in the first proposed scheme and in the time domain in the second. In addition, both methods preserve the orthogonality of the space-frequency block code, so the data and the corresponding SI can easily be recovered with the conventional Alamouti detection method. Simulation results show that the BER performance of an SFBC MIMO-OFDM system with the proposed SI detection algorithm is very close to that with perfect SI detection if the extension factor is larger than 1.3.
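For reference, here is a minimal sketch of baseline SLM for a single OFDM symbol, with assumed values (QPSK, N = 64 subcarriers, U = 8 candidates). The index of the selected candidate is exactly the SI whose explicit transmission the proposed schemes avoid.

```python
import numpy as np

rng = np.random.default_rng(2)
N, U = 64, 8   # subcarriers and number of SLM candidates (assumed values)

def papr_db(x):
    # Peak-to-average power ratio of a time-domain signal, in dB.
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# One OFDM symbol of QPSK data on N subcarriers.
X = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=N) / np.sqrt(2)

# SLM: rotate the frequency-domain symbol by U random phase sequences,
# take the IFFT of each candidate, and transmit the one with lowest PAPR.
phases = np.exp(1j * rng.uniform(0, 2*np.pi, size=(U, N)))
candidates = np.fft.ifft(X * phases, axis=1)
paprs = [papr_db(c) for c in candidates]
best = int(np.argmin(paprs))

print(f"original PAPR {papr_db(np.fft.ifft(X)):.2f} dB, "
      f"best SLM candidate {paprs[best]:.2f} dB (index {best} is the SI)")
```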
17

Robust watermarking techniques for stereoscopic video protection

Chammem, Afef 27 May 2013 (has links)
The explosion in stereoscopic video distribution increases concerns over its copyright protection. Watermarking can be considered the most flexible property-right protection technology. The applicative issue in watermarking is to reach a trade-off among the properties of transparency, robustness, data payload and computational cost. While the capturing and displaying of 3D content are based solely on the two left/right views, alternative representations, like disparity maps, should also be considered during transmission/storage. A specific study on the insertion domain that is optimal with respect to the above-mentioned properties is also required. The present thesis tackles these challenges. First, a new disparity map estimator (3D video-New Three Step Search - 3DV-NTSS) is designed. The performance of 3DV-NTSS was evaluated in terms of visual quality of the reconstructed image and computational cost. Compared with state-of-the-art methods (NTSS and FS-MPEG), average gains of 2 dB in PSNR and 0.1 in SSIM are obtained, while the computational cost is reduced by average factors between 1.3 and 13. Second, a comparative study on the main classes of 2D-inherited watermarking methods and their related optimal insertion domains is carried out. Four insertion methods are considered, belonging to the SS, SI and hybrid (Fast-IProtect) families. The experiments brought to light that Fast-IProtect performed in the new disparity map domain (3DV-NTSS) is generic enough to serve a large variety of applications. The statistical relevance of the results is given by the 95% confidence limits and their underlying relative errors, which are lower than er < 0.1.
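As background for the 3DV-NTSS estimator, the sketch below implements the classic three-step search (TSS) block matching on a synthetic image pair. NTSS adds extra center-biased checking points and early-stopping rules, and the thesis adapts the search further for disparity maps, so this is only the common ancestor of those algorithms.

```python
import numpy as np

def sad(a, b):
    # Sum of absolute differences between two equally sized blocks.
    return np.abs(a.astype(int) - b.astype(int)).sum()

def three_step_search(cur, ref, bx, by, bsize=8, step=4):
    # Classic TSS: probe a 9-point pattern, halve the step, repeat.
    block = cur[by:by+bsize, bx:bx+bsize]
    mx, my = bx, by
    while step >= 1:
        best = (sad(block, ref[my:my+bsize, mx:mx+bsize]), mx, my)
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                x, y = mx + dx, my + dy
                if 0 <= x <= ref.shape[1]-bsize and 0 <= y <= ref.shape[0]-bsize:
                    best = min(best, (sad(block, ref[y:y+bsize, x:x+bsize]), x, y))
        _, mx, my = best
        step //= 2
    return mx - bx, my - by   # estimated displacement for this block

# Smooth synthetic pair: 'cur' is 'ref' shifted 5 pixels to the right, so the
# matching block sits 5 pixels to the left in the reference.
yy, xx = np.mgrid[0:64, 0:64]
ref = (128 + 50*np.sin(xx/5.0) + 40*np.cos(yy/7.0)).astype(np.uint8)
cur = np.roll(ref, shift=5, axis=1)
print(three_step_search(cur, ref, 24, 24))   # expect about (-5, 0)
```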
18

Performance analysis of the IEEE 802.11a WLAN standard optimum and sub-optimum receiver in frequency-selective, slowly fading Nakagami channels with AWGN and pulsed-noise jamming

Kalogrias, Christos 03 1900 (has links)
Approved for public release, distribution is unlimited / Wireless local area networks (WLAN) are increasingly important in meeting the needs of next-generation broadband wireless communications systems for both commercial and military applications. Under the IEEE 802.11a 5 GHz WLAN standard, OFDM was chosen as the modulation scheme for transmission because of its well-known ability to mitigate multipath effects while achieving high data rates. The objective of this thesis is to investigate the performance of the IEEE 802.11a WLAN standard receiver over flat-fading Nakagami channels in a worst-case, pulse-noise jamming environment, for the different combinations of modulation type (binary and non-binary) and code rate specified by the WLAN standard. Receiver performance with Viterbi soft-decision decoding (SDD) is analyzed for additive white Gaussian noise (AWGN) alone and for AWGN plus pulse-noise jamming. Moreover, the performance of the IEEE 802.11a WLAN standard receiver is examined both when perfect side information is assumed to be available (optimum receiver) and when it is not (sub-optimum receiver). In the sub-optimum receiver scenario, performance is examined both with and without noise normalization. The receiver performance is severely affected by the pulse-noise jamming environment, especially in the sub-optimum receiver scenario. However, the sub-optimum receiver performance improves significantly when noise normalization is implemented. / Lieutenant, Hellenic Navy
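Noise normalization amounts to weighting each soft metric by the inverse of the per-symbol noise power, so that symbols hit by the jammer contribute little to the decoding decision. The sketch below demonstrates the effect with assumed parameters, using a rate-1/3 repetition code in place of the standard's convolutional code to keep the metric in focus, and genie-aided noise powers where a real receiver would estimate them.

```python
import numpy as np

rng = np.random.default_rng(4)

# BPSK symbols hit by a pulsed jammer that is ON with probability rho and
# raises the noise power by jam2.  All parameter values are assumptions.
n_bits, reps, rho = 20_000, 3, 0.1
sigma2, jam2 = 0.5, 50.0

bits = rng.integers(0, 2, n_bits)
s = np.repeat(1 - 2*bits, reps).astype(float)    # BPSK, 3 copies per bit
jammed = rng.random(s.size) < rho
var = np.where(jammed, sigma2 + jam2, sigma2)    # per-symbol noise power
r = s + rng.normal(0, np.sqrt(var))

def decide(metric):
    # Combine the per-copy soft metrics for each bit and take the sign.
    return (metric.reshape(-1, reps).sum(axis=1) < 0).astype(int)

ber_plain = (decide(2*r) != bits).mean()         # uniform metric
ber_norm  = (decide(2*r/var) != bits).mean()     # noise-normalized LLR
print(f"BER without normalization {ber_plain:.4f}, with {ber_norm:.4f}")
```

A single jammed copy can otherwise swamp the sum of the clean copies, which is exactly the degradation the sub-optimum receiver suffers without normalization.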
19

Source coding with side information and uncertain correlation knowledge

Dupraz, Elsa 03 December 2013 (has links)
In this thesis, we considered the problem of source coding with side information available at the decoder only. More specifically, we considered the case where the joint distribution between the source and the side information is not perfectly known. In this context, we first carried out a performance analysis of lossless source coding using information-theoretic tools. We then proposed a practical coding scheme able to cope with the uncertainty about the joint probability distribution. This coding scheme is based on non-binary LDPC codes and on an Expectation-Maximization algorithm. A key issue for this scheme is to design efficient non-binary LDPC codes, i.e., codes built from degree distributions that allow rates close to the theoretical limits; we therefore proposed an optimization method for selecting good degree distributions. Finally, we considered lossy coding, assuming that the correlation channel between the source and the side information is described by a hidden Markov model with Gaussian emissions. For this model, we again performed a performance analysis and proposed a practical coding scheme, based on non-binary LDPC codes and on MMSE reconstruction using an MCMC method. Both components exploit the memory induced by the hidden Markov model.
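The EM idea — alternating between soft estimates of the source bits and a re-estimate of the correlation parameter — can be shown on a toy binary-symmetric correlation model. In the sketch below, the LDPC decoder's soft outputs are mimicked by independent Gaussian observations of the source bits; everything else (p_true, sigma, the BSC model itself) is an assumed stand-in for the thesis' decoder-in-the-loop algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy EM for an unknown source/side-information correlation, modeled as a
# BSC with crossover p_true.  'obs' stands in for decoder soft information.
n, p_true, sigma = 50_000, 0.08, 0.8
x = rng.integers(0, 2, n)
y = x ^ (rng.random(n) < p_true)                 # side information
obs = (1 - 2*x) + rng.normal(0, sigma, n)        # noisy BPSK view of x
lik1 = np.exp(-(obs + 1)**2 / (2*sigma**2))      # p(obs | x=1): maps to -1
lik0 = np.exp(-(obs - 1)**2 / (2*sigma**2))      # p(obs | x=0): maps to +1

p = 0.3                                          # crude initial guess
for _ in range(30):
    # E-step: posterior q_i = P(x_i = 1 | obs_i, y_i) under current p.
    w1 = lik1 * np.where(y == 1, 1 - p, p)
    w0 = lik0 * np.where(y == 0, 1 - p, p)
    q = w1 / (w1 + w0)
    # M-step: p = expected fraction of positions where x and y disagree.
    p = np.mean(q * (y == 0) + (1 - q) * (y == 1))

print(f"estimated crossover p = {p:.4f} (true {p_true})")
```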
