151

Source and Channel Coding for Compressed Sensing and Control

Shirazinia, Amirpasha January 2014 (has links)
Rapid advances in sensor technologies have fueled massive torrents of data streaming across networks. Such large volumes of information restrict the operational performance of data processing, causing inefficiency in sensing, computation, communication and control. Hence, classical data processing techniques need to be re-analyzed and re-designed before being applied to modern networked data systems. This thesis aims to understand and characterize the fundamental principles and interactions among sensing, compression, communication, computation and control in networked data systems. In this regard, the thesis investigates four problems. The common theme is the design and analysis of optimized low-delay transmission strategies with affordable complexity for reliable communication of acquired data over networks, with the objective of providing high quality of service for users. The first three problems use an emerging framework for data acquisition, namely compressed sensing, which performs acquisition and compression simultaneously. The first problem considers the design of iterative encoding schemes, based on scalar quantization, for transmission of compressed sensing measurements over rate-limited links. Our approach is based on an analysis-by-synthesis principle: the non-linear reconstruction introduced by compressed sensing is reflected, via synthesis, in choosing the best quantized value for encoding, via analysis. Our design shows significant reconstruction performance gains compared to schemes that only consider direct quantization of compressed sensing measurements. In the second problem, we investigate the design and analysis of encoding--decoding schemes, based on vector quantization, for transmission of compressed sensing measurements over rate-limited noisy links. Here, we take an approach adapted from the joint source-channel coding framework. We show that the performance of the studied system can approach the fundamental theoretical bound by optimizing the encoder-decoder pair. The price, however, is increased complexity at the encoder. To address the encoding complexity of the vector quantizer, we propose a low-complexity multi-stage vector quantizer whose optimized design shows promising performance. In the third problem, we take one step forward and study joint source-channel coding schemes, based on vector quantization, for distributed transmission of compressed sensing measurements over noisy rate-limited links. We design optimized distributed coding schemes and analyze theoretical bounds for this topology. Under certain conditions, our results reveal that the performance of the optimized schemes approaches the analytical bounds. In the last problem, in the context of control under communication constraints, we bring the notion of system dynamics into the picture. In particular, we study the relations among stability of dynamical networked control systems, the performance of real-time coding schemes and coding complexity. For this purpose, we take approaches adapted from separate source-channel coding, and derive theoretical bounds on the performance of two types of coding schemes: dynamic repetition codes and dynamic Fountain codes. We show analytically and numerically that dynamic Fountain codes with belief propagation decoding over binary-input symmetric channels can provide system stability in a networked control system.
The results in the thesis demonstrate that impressive performance gains are feasible by applying tools from communication and information theory to control and sensing. The insights offered through the design and analysis also reveal fundamental pieces for understanding the real-world networked data puzzle. / QC 20140407
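A rough sketch of the analysis-by-synthesis idea described in this abstract, in Python: each measurement is quantized not to the nearest level but to the level whose reconstructed signal (synthesis) best explains the observations (analysis). The greedy per-measurement loop, the small OMP routine used as a stand-in CS decoder, and all dimensions and quantizer levels are illustrative assumptions, not the schemes designed in the thesis.

```python
import numpy as np

def omp(y, A, k):
    """Tiny orthogonal matching pursuit, used here only as a stand-in CS decoder."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

def quantize_analysis_by_synthesis(y, A, levels, sparsity):
    """Greedily quantize each measurement to the level that best explains y after reconstruction."""
    q = y.copy()
    for i in range(len(y)):
        best_err, best_level = np.inf, levels[0]
        for level in levels:
            cand = q.copy()
            cand[i] = level
            x_hat = omp(cand, A, sparsity)        # synthesis: reconstruct from candidate quantized data
            err = np.linalg.norm(y - A @ x_hat)   # analysis: score against the unquantized measurements
            if err < best_err:
                best_err, best_level = err, level
        q[i] = best_level
    return q

# Toy example with assumed dimensions and quantizer levels
rng = np.random.default_rng(0)
A = rng.standard_normal((16, 64)) / 4.0
x = np.zeros(64); x[[3, 17, 40]] = [1.0, -0.7, 0.5]
y = A @ x
q = quantize_analysis_by_synthesis(y, A, np.linspace(-2, 2, 9), sparsity=3)
```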
152

Parallel VLSI Architectures for Multi-Gbps MIMO Communication Systems

Sun, Yang January 2011 (has links)
In wireless communications, the use of multiple antennas at both the transmitter and the receiver is a key technology to enable high data rate transmission without additional bandwidth or transmit power. Multiple-input multiple-output (MIMO) schemes are widely used in many wireless standards, allowing higher throughput using spatial multiplexing techniques. MIMO soft detection poses significant challenges to MIMO receiver design as the detection complexity increases exponentially with the number of antennas. As next-generation wireless systems push for multi-Gbps data rates, there is a great need for high-throughput, low-complexity soft-output MIMO detectors. A brute-force implementation of the optimal MIMO detection algorithm would consume enormous power and is not feasible with current technology. We propose a reduced-complexity soft-output MIMO detector architecture based on a trellis-search method. We convert the MIMO detection problem into a shortest-path problem. We introduce a path-reduction and a path-extension algorithm to reduce the search complexity while still maintaining sufficient soft information values for the detection. We avoid the missing counter-hypothesis problem by keeping multiple paths during the trellis-search process. The proposed trellis-search algorithm is data-parallel and very suitable for high-speed VLSI implementation. Compared with conventional tree-search based detectors, the proposed trellis-based detector shows a significant improvement in detection throughput and area efficiency. The proposed MIMO detector has great potential for next-generation Gbps wireless systems by achieving very high throughput and good error performance. The soft information generated by the MIMO detector is processed by a channel decoder, e.g. a low-density parity-check (LDPC) decoder or a Turbo decoder, to recover the original information bits. The channel decoder is another computationally intensive block in a MIMO receiver SoC (system-on-chip). We will present high-performance LDPC decoder architectures and Turbo decoder architectures that achieve 1+ Gbps data rates. Further, a configurable decoder architecture that can be dynamically reconfigured to support both LDPC codes and Turbo codes is developed to support multiple 3G/4G wireless standards. We will present ASIC and FPGA implementation results of various MIMO detectors, LDPC decoders, and Turbo decoders, and discuss in detail the computational complexity and throughput performance of these detectors and decoders.
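As a hedged illustration of the shortest-path view of MIMO detection mentioned above (not the detector architecture proposed in the thesis): after a QR decomposition the ML metric decomposes over antenna layers, so candidate symbol paths can be extended layer by layer and pruned to a small beam, loosely analogous to the path-extension/path-reduction idea. The beam width, BPSK constellation and hard-output simplification are assumptions for illustration only.

```python
import numpy as np

def mimo_path_search_detect(y, H, constellation, beam=4):
    """Hard-output MIMO detection via a layered shortest-path (beam) search sketch."""
    Q, R = np.linalg.qr(H)
    z = Q.conj().T @ y                      # rotate so the metric decomposes layer by layer
    n = H.shape[1]
    paths = [(0.0, [])]                     # (accumulated metric, symbols for layers i..n-1)
    for layer in range(n - 1, -1, -1):
        extended = []
        for metric, syms in paths:          # path extension: try every constellation point
            for s in constellation:
                cand = [s] + syms
                pred = sum(R[layer, layer + k] * cand[k] for k in range(len(cand)))
                extended.append((metric + abs(z[layer] - pred) ** 2, cand))
        extended.sort(key=lambda p: p[0])
        paths = extended[:beam]             # path reduction: keep only the best `beam` paths
    return np.array(paths[0][1])            # detected symbols for antennas 0..n-1

# Example with an assumed 4x4 channel and BPSK symbols
rng = np.random.default_rng(1)
H = rng.standard_normal((4, 4))
s = rng.choice([-1.0, 1.0], size=4)
y = H @ s + 0.05 * rng.standard_normal(4)
print(mimo_path_search_detect(y, H, [-1.0, 1.0]), s)
```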
153

Enhancing the Performance of Relay Networks with Network Coding

Melvin, Scott Harold 02 August 2012 (has links)
This dissertation examines the design and application of network coding (NC) strategies to enhance the performance of communication networks. With its ability to combine information packets from different, previously independent data flows, NC has the potential to improve throughput, reduce delay and increase the power efficiency of communication systems in ways that have not yet been fully exploited, given the limited processing power currently available at relay nodes. With these motivations in mind, this dissertation presents three main contributions that employ NC to improve the efficiency of practical communication systems. First, the integration of NC and erasure coding (EC) is presented in the context of wired networks. While the throughput gains from utilizing NC have been demonstrated, and EC has been shown to be an efficient means of reducing packet loss, the two have generally been studied independently. This dissertation presents innovative methods to combine these two techniques through cross-layer design methodologies. Second, three methods to reduce or limit the delay introduced by NC when deployed in networks with asynchronous traffic are developed. Also, a novel opportunistic approach to applying EC for improved data reliability is designed to take advantage of unused opportunities introduced by the proposed delay-reduction methods. Finally, computationally efficient methods for the selection of relay nodes and the assignment of transmit power values to minimize the total transmit power consumed in cooperative relay networks with NC are developed. Adaptive power allocation is used to control the formation of the network topology and maximize the efficiency of the NC algorithm. This dissertation advances the efficient deployment of NC through its integration with other algorithms and techniques in cooperative communication systems within the framework of cross-layer protocol design. The motivation is that, to improve the performance of communication systems, relay nodes will need to perform more intelligent processing of data units than traditional routing. The results presented in this work are applicable to both wireless and wired networks with real-time traffic, in systems ranging from cellular and ad-hoc networks to fixed optical networks.
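A toy illustration of the core NC idea referenced above (combining packets from independent flows at a relay): in a two-way relay exchange, the relay can broadcast a single XOR-coded packet in place of two separate forwards, and each endpoint removes its own packet to recover the other's. The packet contents below are purely illustrative.

```python
def xor_packets(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

# Nodes A and B exchange packets through a relay (two-way relay channel).
pkt_a = b"payload from A."
pkt_b = b"payload from B."

coded = xor_packets(pkt_a, pkt_b)          # the relay broadcasts one coded packet, not two

assert xor_packets(coded, pkt_b) == pkt_a  # B removes its own packet to recover A's
assert xor_packets(coded, pkt_a) == pkt_b  # A does the same to recover B's
```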
154

Multiview Video Compression

Bai, Baochun Unknown Date
No description available.
155

On Message Fragmentation, Coding and Social Networking in Intermittently Connected Networks

Altamimi, Ahmed B. 23 October 2014 (has links)
An intermittently connected network (ICN) is defined as a mobile network that uses cooperation between nodes to facilitate communication. This cooperation consists of nodes carrying messages from other nodes to help deliver them to their destinations. An ICN does not require an infrastructure, and routing information is not retained by the nodes. While this may be a useful environment for message dissemination, it creates routing challenges. In particular, providing satisfactory delivery performance while keeping the overhead low is difficult with no network infrastructure or routing information. This dissertation explores solutions that lead to a high delivery probability while maintaining a low overhead ratio. The efficiency of message fragmentation in ICNs is first examined. Next, the performance of routing is investigated when erasure coding and network coding are employed in ICNs. Finally, the use of social networking in ICNs to achieve high routing performance is considered. The aim of this work is to improve the delivery probability while maintaining a low overhead ratio. Message fragmentation is shown to improve the CDF of the message delivery probability compared to existing methods. The use of erasure coding in an ICN further improves this CDF. Finally, the use of network coding is examined, and its advantage over message replication is quantified in terms of the message delivery probability. Results are presented which show that network coding can improve the delivery probability compared to using message replication alone. / Graduate / 0544 / 0984 / ahmedbdr@engr.uvic.ca
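A small numeric sketch of why erasure coding can help fragmented delivery in such networks: if a message is split into k fragments that each arrive independently with probability p, coding them into n > k fragments so that any k suffice raises the overall delivery probability. The values of n, k and p below are assumptions chosen only for illustration, not figures from the dissertation.

```python
from math import comb

def delivery_probability(n: int, k: int, p: float) -> float:
    """Probability that at least k of n independently delivered fragments arrive."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.7  # assumed per-fragment delivery probability
print(delivery_probability(4, 4, p))  # plain fragmentation: all 4 fragments required (~0.24)
print(delivery_probability(6, 4, p))  # (6,4) erasure code: any 4 of 6 coded fragments suffice (~0.74)
```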
156

Multiview Video Compression

Bai, Baochun 11 1900 (has links)
With the progress of computer graphics and computer vision technologies, 3D/multiview video applications such as 3D-TV and tele-immersive conferencing are becoming more and more popular and are very likely to emerge as prime applications in the near future. A successful 3D/multiview video system needs synergistic integration of various technologies such as 3D/multiview video acquisition, compression, transmission and rendering. In this thesis, we focus on addressing the challenges of multiview video compression. In particular, we make five major contributions: (1) We propose a novel neighbor-based multiview video compression system which helps remove the inter-view redundancies among multiple video streams and improves performance. An optimal stream encoding order algorithm is designed to enable the encoder to automatically decide the stream encoding order and find the best reference streams. (2) A novel multiview video transcoder is designed and implemented. The proposed multiview video transcoder can be used to encode multiple compressed video streams and reduce the cost of a multiview video acquisition system. (3) A learning-based multiview video compression scheme is developed. The novel multiview video compression algorithms build on recent advances in semi-supervised learning and achieve compression by finding a sparse representation of images. (4) Two novel distributed source coding algorithms, EETG and SNS-SWC, are put forward. Both EETG and SNS-SWC are capable of achieving the whole Slepian-Wolf rate region and are syndrome-based schemes. EETG simplifies the code construction algorithm for distributed source coding schemes using an extended Tanner graph and is able to handle mismatched bits at the encoder. SNS-SWC has two independent decoders and thus can simplify the decoding process. (5) We propose a novel distributed multiview video coding scheme which allows flexible rate allocation between two distributed multiview video encoders. SNS-SWC is used as the underlying Slepian-Wolf coding scheme. It is the first work to realize simultaneous Slepian-Wolf coding of stereo videos with the help of a distributed source code that achieves the whole Slepian-Wolf rate region. The proposed scheme has better rate-distortion performance than separate H.264 coding in the high-rate case. / Computer Networks and Multimedia Systems
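For readers unfamiliar with the syndrome-based Slepian-Wolf coding mentioned in contribution (4), here is a minimal textbook-style sketch (not the EETG or SNS-SWC constructions): a (7,4) Hamming parity-check matrix compresses a 7-bit block to its 3-bit syndrome, and the decoder recovers the block from correlated side information under the assumption that the two differ in at most one bit.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column j is the binary
# representation of j+1, so a single differing bit is located by its syndrome.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def sw_encode(x):
    """Encoder transmits only the 3-bit syndrome of the 7-bit source block."""
    return (H @ x) % 2

def sw_decode(s, y):
    """Decoder combines the received syndrome with correlated side information y."""
    diff_syndrome = (s + H @ y) % 2                          # syndrome of x XOR y
    x_hat = y.copy()
    if diff_syndrome.any():
        pos = int("".join(map(str, diff_syndrome)), 2) - 1   # column whose syndrome matches
        x_hat[pos] ^= 1
    return x_hat

x = np.array([1, 0, 1, 1, 0, 0, 1])
y = x.copy(); y[4] ^= 1                                      # side information differs in one position
assert (sw_decode(sw_encode(x), y) == x).all()
```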
157

Improved processing techniques for picture sequence coding /

Choi, Koo-ting. January 1998 (has links)
Thesis (M. Phil.)--University of Hong Kong, 1999. / Includes bibliographical references.
158

Advanced techniques for video codec optimization /

Su, Yeping. January 2005 (has links)
Thesis (Ph. D.)-- University of Washington, 2005. / Vita. Includes bibliographical references (p. 115-118).
159

Μελέτη και υλοποίηση αλγορίθμων συμπίεσης (Study and Implementation of Compression Algorithms)

Γρίβας, Απόστολος 19 May 2011 (has links)
In this thesis we study and implement several data compression algorithms. The basic principles of coding are presented, along with the mathematical foundations of information theory and various types of codes. Huffman coding and arithmetic coding are then analyzed in detail. Finally, the two coding schemes are implemented in the C programming language and used to compress text files. The resulting files are compared with files compressed using commercial programs, the causes of the differences in efficiency are analyzed and useful conclusions are drawn.
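As a hedged sketch of the Huffman coding studied above (shown here in Python rather than the thesis's C implementation, and with an arbitrary sample string), a frequency-ordered heap of partial code tables is merged pairwise until a single prefix-free code remains.

```python
import heapq
from collections import Counter

def huffman_codes(text: str) -> dict:
    """Build a prefix-free Huffman code table for the characters of `text`."""
    heap = [(freq, i, {ch: ""}) for i, (ch, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tie = len(heap)                                  # tie-breaker so code tables are never compared
    while len(heap) > 1:
        f0, _, t0 = heapq.heappop(heap)              # two least frequent subtrees
        f1, _, t1 = heapq.heappop(heap)
        merged = {ch: "0" + code for ch, code in t0.items()}
        merged.update({ch: "1" + code for ch, code in t1.items()})
        heapq.heappush(heap, (f0 + f1, tie, merged))
        tie += 1
    return heap[0][2]

text = "abracadabra"                                 # arbitrary sample input
codes = huffman_codes(text)
encoded = "".join(codes[ch] for ch in text)
print(codes, len(encoded), "bits vs", 8 * len(text), "bits uncompressed")
```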
160

A novel fully progressive lossy-to-lossless coder for arbitrarily-connected triangle-mesh models of images and other bivariate functions

Guo, Jiacheng 16 August 2018 (has links)
A new progressive lossy-to-lossless coding method for arbitrarily-connected triangle-mesh models of bivariate functions is proposed. The algorithm employs a novel representation of a mesh dataset called a bivariate-function description (BFD) tree, and codes the tree in an efficient manner. The proposed coder yields a particularly compact description of the mesh connectivity by only coding the constrained edges that are not locally preferred Delaunay (locally PD). Experimental results show our method to be vastly superior to previously-proposed coding frameworks in both lossless and progressive coding performance. For lossless coding, the proposed method produces coded bitstreams that are 27.3% and 68.1% smaller than those generated by the Edgebreaker and Wavemesh methods, respectively. Progressive coding performance is measured in terms of the PSNR of function reconstructions generated from the meshes decoded at intermediate stages. The experimental results show that the function approximations obtained with the proposed approach are vastly superior to those yielded by the image tree (IT) method, the scattered data coding (SDC) method, the average-difference image tree (ADIT) method, and the Wavemesh method, with average improvements of 4.70 dB, 10.06 dB, 2.92 dB, and 10.19 dB in PSNR, respectively. The proposed coding approach can also be combined with a mesh generator to form a highly effective mesh-based image coding system, which is evaluated by comparison with the popular JPEG2000 codec for images that are nearly piecewise smooth. The images are compressed with the mesh-based image coder and the JPEG2000 codec at fixed compression rates, and the quality of the resulting reconstructions is measured in terms of PSNR. The images obtained with our method are shown to have better quality than those produced by the JPEG2000 codec, with an average improvement of 3.46 dB. / Graduate
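For intuition about the connectivity test mentioned above, the sketch below implements the standard empty-circumcircle check for whether an edge shared by two triangles is locally Delaunay; the thesis's locally-preferred-Delaunay criterion for constrained edges is related but not identical, so treat this only as background illustration with made-up coordinates.

```python
def in_circumcircle(a, b, c, d):
    """True if point d lies strictly inside the circumcircle of triangle (a, b, c)."""
    ax, ay = a; bx, by = b; cx, cy = c; dx, dy = d
    # Normalise (a, b, c) to counter-clockwise order so the sign convention holds.
    if (bx - ax) * (cy - ay) - (by - ay) * (cx - ax) < 0:
        bx, by, cx, cy = cx, cy, bx, by
    m = [[ax - dx, ay - dy, (ax - dx) ** 2 + (ay - dy) ** 2],
         [bx - dx, by - dy, (bx - dx) ** 2 + (by - dy) ** 2],
         [cx - dx, cy - dy, (cx - dx) ** 2 + (cy - dy) ** 2]]
    det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
           - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
           + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    return det > 0

def edge_is_locally_delaunay(p, q, r, s):
    """Edge (p, q) shared by triangles (p, q, r) and (p, q, s): locally Delaunay if
    neither opposite vertex lies inside the other triangle's circumcircle."""
    return not in_circumcircle(p, q, r, s) and not in_circumcircle(p, q, s, r)

print(edge_is_locally_delaunay((0, 0), (2, 0), (1, 1), (1, -1)))      # True: symmetric quadrilateral
print(edge_is_locally_delaunay((0, 0), (2, 0), (1, 0.2), (1, -0.2)))  # False: flipping the edge would help
```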
