1. Universal Source Coding in the Non-Asymptotic Regime

January 2018
Abstract: Fundamental limits of fixed-to-variable (F-V) and variable-to-fixed (V-F) length universal source coding at short blocklengths are characterized. For F-V length coding, the Type Size (TS) code has previously been shown to be optimal up to the third-order rate for universal compression of all memoryless sources over finite alphabets. The TS code assigns sequences, ordered by their type class sizes, to binary strings ordered lexicographically. The universal F-V coding problem for the class of first-order stationary, irreducible and aperiodic Markov sources is considered first. The third-order coding rate of the TS code for the Markov class is derived, and a converse on the third-order coding rate for the general class of F-V codes shows the optimality of the TS code for such Markov sources. This type-class approach is then generalized to compression of parametric sources. A natural scheme is to define two sequences to be in the same type class if and only if they are equiprobable under every model in the parametric class. This natural approach, however, is shown to be suboptimal. A variation of the Type Size code is introduced, in which type classes are defined based on neighborhoods of minimal sufficient statistics. The asymptotics of the overflow rate of this variation are derived, and a converse result establishes its optimality up to the third-order term. These results are derived for parametric families of i.i.d. sources as well as Markov sources. Finally, universal V-F length coding of the class of parametric sources is considered in the short-blocklength regime. The proposed dictionary, which is used to parse the source output stream, consists of sequences at the boundaries of the transition from low to high quantized type complexity, hence the name Type Complexity (TC) code. For a large enough dictionary, the $\epsilon$-coding rate of the TC code is derived, and a converse result shows its optimality up to the third-order term.
Doctoral Dissertation, Electrical Engineering, 2018.
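To make the ordering behind the TS code concrete, here is a minimal, illustrative sketch (our own toy example, not code from the dissertation) for binary sequences at a short blocklength: sequences are sorted by type class size, smaller classes first, and matched in order to binary strings enumerated by length and then lexicographically. The function names and parameter choices are assumptions for illustration, and the sketch ignores the Markov and parametric extensions and the minimal-sufficient-statistic neighborhoods described above.

```python
"""
Illustrative sketch of the Type Size (TS) code ordering for a binary
memoryless class at a short blocklength n (toy example, not thesis code).
"""
from itertools import product
from math import comb

def type_class_size(seq):
    # For a binary sequence, the type class size is the binomial
    # coefficient C(n, k), where k is the number of ones.
    n, k = len(seq), sum(seq)
    return comb(n, k)

def index_to_binary_string(i):
    # Enumerate binary strings by length, then lexicographically:
    # 0 -> "", 1 -> "0", 2 -> "1", 3 -> "00", 4 -> "01", ...
    length = 0
    while i >= 2 ** length:
        i -= 2 ** length
        length += 1
    return format(i, "b").zfill(length) if length > 0 else ""

def ts_codebook(n):
    # Sort all length-n binary sequences by type class size (ties broken
    # lexicographically) and assign codewords in that order.
    seqs = sorted(product((0, 1), repeat=n),
                  key=lambda s: (type_class_size(s), s))
    return {seq: index_to_binary_string(i) for i, seq in enumerate(seqs)}

if __name__ == "__main__":
    n = 6
    book = ts_codebook(n)
    for seq in [(0,) * n, (0, 1) * (n // 2), (1,) * n]:
        print(seq, "->", repr(book[seq]), "length", len(book[seq]))
```

In this toy version, the all-zeros and all-ones sequences sit in type classes of size one and receive the shortest codewords, while sequences of balanced composition sit in the largest classes and receive codewords whose length approaches n bits.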
2. Information retrieval via universal source coding

Bae, Soo Hyun, 17 November 2008
This dissertation explores the intersection of information retrieval and universal source coding techniques and studies an optimal multidimensional source representation from an information-theoretic point of view. Previous research on information retrieval has focused primarily on learning probabilistic or deterministic source models based on two types of source representations, such as fixed-shape partitions or uniform regions. We study the limitations of these conventional source representations in capturing the semantics of a given multidimensional source sequence and propose a new type of primitive source representation generated by a universal source coding technique. We propose a multidimensional incremental parsing algorithm, extended from Lempel-Ziv incremental parsing, together with three component schemes for multidimensional source coding. The properties of the proposed coding algorithm are examined under two-dimensional lossless and lossy source coding. Under the proposed algorithm, a given multidimensional source sequence is parsed into a number of variable-size patches; we call this methodology a parsed representation. Based on this source representation, we propose an information retrieval framework that analyzes a set of source sequences with a linguistic processing technique, and we implement content-based image retrieval systems on top of it. We examine the relevance of the proposed source representation by comparing it with conventional representations of visual information. To further extend the proposed framework, we apply a probabilistic linguistic processing technique to model the latent aspects of a set of documents. In addition, beyond the symbol-wise pattern matching paradigm employed in the source coding and image retrieval systems, we devise a robust pattern matching scheme that compares the first- and second-order statistics of source patches. Qualitative and quantitative analyses justify the superiority of the proposed information retrieval framework based on the parsed representation. The proposed source representation technique and the information retrieval frameworks encourage future work in exploiting a systematic way of understanding multidimensional sources that parallels linguistic structure.
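For readers unfamiliar with the parsing step this framework builds on, the following is a minimal sketch of classical one-dimensional Lempel-Ziv (LZ78) incremental parsing, the scheme that the multidimensional incremental parsing above generalizes. It is our own illustration, not the thesis algorithm; the multidimensional version grows variable-size patches rather than one-dimensional phrases.

```python
"""
Classical one-dimensional LZ78 incremental parsing (illustrative sketch).
The input string is split into the shortest phrases not seen before.
"""

def lz78_incremental_parse(sequence):
    """Return the list of phrases produced by LZ78 incremental parsing."""
    dictionary = {""}          # the empty phrase is always in the dictionary
    phrases = []
    current = ""
    for symbol in sequence:
        candidate = current + symbol
        if candidate in dictionary:
            current = candidate          # keep extending the current phrase
        else:
            dictionary.add(candidate)    # shortest new phrase found
            phrases.append(candidate)
            current = ""
    if current:                          # flush a possibly repeated tail
        phrases.append(current)
    return phrases

if __name__ == "__main__":
    print(lz78_incremental_parse("abababbbaaab"))
    # -> ['a', 'b', 'ab', 'abb', 'ba', 'aa', 'b']
    # The phrase count grows slowly for compressible sources, which is the
    # property a parsed representation exploits for retrieval.
```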
3. Spectrum Sensing in Cognitive Radios using Distributed Sequential Detection

Jithin, K S, January 2013
Cognitive Radios are emerging communication systems which efficiently utilize the unused portions of the licensed radio spectrum, called spectral holes. They run spectrum sensing algorithms to identify these spectral holes. These holes need to be identified at very low SNR (<= -20 dB) under multipath fading and unknown channel gains and noise power. Cooperative spectrum sensing, which exploits spatial diversity, has been found to be particularly effective in this rather daunting endeavor. However, despite many recent studies, several open issues remain to be addressed for such algorithms. In this thesis we provide some novel cooperative distributed algorithms and study their performance. We develop an energy-efficient detector with low detection delay using decentralized sequential hypothesis testing. Our algorithm at the Cognitive Radios employs an asynchronous transmission scheme which takes into account the noise at the fusion center. We have developed a distributed algorithm, DualSPRT, in which Cognitive Radios (secondary users) sequentially collect the observations, make local decisions and send them to the fusion center. The fusion center sequentially processes these received local decisions, corrupted by Gaussian noise, to arrive at a final decision. Asymptotically, this algorithm is shown to achieve the performance of the optimal centralized test, which does not consider fusion center noise. We also theoretically analyze its probability of error and average detection delay. Even though DualSPRT performs well asymptotically, a modification at the fusion node provides more control over the design of the algorithm parameters and performs better at the operating probabilities of error typical of Cognitive Radio systems. We also analyze the modified algorithm theoretically. DualSPRT requires full knowledge of channel gains; we therefore extend the algorithm to account for imperfections in channel gain estimates. We also consider the case where knowledge of the noise power and channel gain statistics is not available at the Cognitive Radios. This problem is framed as a universal sequential hypothesis testing problem, and we use easily implementable universal lossless source codes to propose simple algorithms for such a setup. The asymptotic performance of the algorithm is presented, and a cooperative algorithm is also designed for this scenario. Finally, decentralized multihypothesis sequential tests are considered; these are relevant when the goal is to detect not only the presence of primary users but also their identity among multiple primary users. Using the insight gained from the binary hypothesis case, two new algorithms are proposed.
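As a rough illustration of the DualSPRT idea described above, here is a simplified simulation sketch (our own toy version, with arbitrary parameters rather than the optimized choices analyzed in the thesis): each secondary user runs a local SPRT on its observations and, after crossing a local threshold, repeatedly transmits its decision; the fusion center observes the superposed transmissions in Gaussian noise and accumulates them between two thresholds, a crude stand-in for the fusion-center test studied in the thesis.

```python
"""
Simplified DualSPRT-style simulation (toy illustration, not thesis code).
H0: no primary user (observation mean 0); H1: primary present (mean MU).
"""
import random

MU, B = 1.0, 1.0                 # signal mean under H1, transmit amplitude
LOCAL_THR, FC_THR = 5.0, 20.0    # local SPRT and fusion-center thresholds
NOISE_STD = 1.0                  # reporting-channel noise at the fusion center

def local_llr(x):
    # Log-likelihood ratio of one unit-variance Gaussian sample:
    # log N(x; MU, 1) - log N(x; 0, 1) = MU * x - MU**2 / 2
    return MU * x - MU ** 2 / 2

def dual_sprt(num_users=5, h1_true=True, max_time=10_000, seed=0):
    rng = random.Random(seed)
    local_sum = [0.0] * num_users    # running LLR at each secondary user
    decision = [0] * num_users       # 0: undecided, +1: H1, -1: H0
    fc_sum = 0.0                     # fusion-center test statistic
    for t in range(1, max_time + 1):
        tx_total = 0.0
        for u in range(num_users):
            if decision[u] == 0:     # still sensing: run the local SPRT
                x = rng.gauss(MU if h1_true else 0.0, 1.0)
                local_sum[u] += local_llr(x)
                if local_sum[u] >= LOCAL_THR:
                    decision[u] = +1
                elif local_sum[u] <= -LOCAL_THR:
                    decision[u] = -1
            tx_total += B * decision[u]      # decided users keep transmitting
        fc_sum += tx_total + rng.gauss(0.0, NOISE_STD)
        if fc_sum >= FC_THR:
            return "H1 (primary present)", t
        if fc_sum <= -FC_THR:
            return "H0 (spectrum hole)", t
    return "undecided", max_time

if __name__ == "__main__":
    print(dual_sprt(h1_true=True))
    print(dual_sprt(h1_true=False))
```

With these toy parameters the local tests settle after a handful of samples and the fusion center follows shortly after; the thesis analyzes the corresponding error probabilities and average detection delay in the far harder regime of very low SNR and noisy reporting channels.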
