About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Its metadata is collected from universities around the world.
191

Power-efficient design methodology for video decoding. / CUHK electronic theses & dissertations collection

January 2007
As a proof of concept, the presented power-efficient design methodology is experimentally verified on an H.264/AVC baseline decoding system. A prototype chip is fabricated in UMC 0.18 μm 1P6M standard CMOS technology. It is capable of decoding the H.264/AVC baseline profile at QCIF resolution and 30 fps. The chip contains 169k gates and 2.5 kB of on-chip SRAM in a 4.5 mm × 4.5 mm chip area. It dissipates 293 μW at 1.0 V and 973 μW at 1.8 V during real-time video decoding. Compared with conventional designs, the measured power consumption is reduced by up to one order of magnitude. / CMOS technology has now entered the "power-limited scaling regime", where power consumption moves from being one of many design metrics to being the number one design metric. Meanwhile, rapid advances in multimedia entertainment pose ever more stringent constraints on power dissipation, mainly due to increased video quality. Although general power-efficient design techniques have existed for several years, no prior literature has studied how to apply them systematically to a specific application such as video decoding. Beyond these general methods, video decoding offers unique power-optimization opportunities owing to the temporal, spatial, and statistical redundancy in digital video data. / This research focuses on a systematic way to exploit power-saving potential spanning all design levels for real-time video decoding. At the algorithm level, the computational complexity and data width are optimized. At the architectural level, pipelining and parallelism are widely adopted to reduce the operating frequency; distributed processing greatly reduces the number of global communications; and a hierarchical memory organization moves a large part of data accesses from larger or external memories to smaller ones. At the circuit level, resource sharing reduces total switching capacitance through multi-function reconfiguration; knowledge of signal statistics is exploited to reduce the number of transitions; data-dependent signal-gating and clock-gating are introduced as dynamic power-reduction techniques; and multiplications, which account for large chip area and switching power, are reduced to a minimum through proper transformations, while complex dividers are eliminated entirely. At the transistor and physical design level, cell sizing and layout are optimized for power efficiency. The higher levels, such as algorithm and architecture, contribute the larger portion of the power reduction, while the lower levels, such as transistor and physical design, further reduce power where high-level techniques are not applicable. / Xu, Ke. / "September 2007." / Adviser: Chui-Sing Choy. / Source: Dissertation Abstracts International, Volume: 69-08, Section: B, page: 4952. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2007. / Includes bibliographical references (p. 239-247). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
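To make the circuit-level point about minimizing multiplications concrete, the following sketch shows the kind of shift-and-add strength reduction commonly used to eliminate hardware multipliers (as in the multiplier-free H.264 integer transform). The constant and function name are illustrative assumptions, not taken from the thesis.

```python
# Illustrative only: replacing a constant multiplication with shifts and adds,
# the kind of transformation used to remove hardware multipliers.

def times_13(x: int) -> int:
    # 13 = 8 + 4 + 1, so x * 13 = (x << 3) + (x << 2) + x -- no multiplier needed.
    return (x << 3) + (x << 2) + x

assert all(times_13(v) == 13 * v for v in range(-64, 64))
```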
192

Lossless video multiplexing for transport over communication networks.

January 1997
by Chan Hang Fung. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1997. / Includes bibliographical references (leaves 62-68). / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Overview of video transmission --- p.1 / Chapter 1.2 --- Previous work on lossless video transmission --- p.4 / Chapter 1.3 --- Central theme of thesis – Lossless Video Aggregation --- p.5 / Chapter 1.4 --- Organization of thesis --- p.9 / Chapter 2 --- Framework of LVAS --- p.11 / Chapter 2.1 --- Review: Transporting single VBR stream using a CBR channel --- p.11 / Chapter 2.2 --- Lossless aggregation of VBR streams --- p.14 / Chapter 3 --- Minimization of Buffer Size --- p.17 / Chapter 3.1 --- A theoretical approach – Dynamic programming --- p.19 / Chapter 3.2 --- A practical heuristic – Backward Equalization --- p.21 / Chapter 3.3 --- Simulation results of the heuristic method --- p.24 / Chapter 4 --- Bit-rate allocation with fixed buffer --- p.28 / Chapter 4.1 --- Problem formulation --- p.28 / Chapter 4.2 --- Different bit-rate scheduling methods --- p.33 / Chapter 4.3 --- Speed up using point sampling technique --- p.39 / Chapter 4.4 --- Simulation results --- p.44 / Chapter 5 --- Call Admission and Interactive Control for Video Aggregation --- p.50 / Chapter 5.1 --- Call admission issues --- p.50 / Chapter 5.2 --- Interactive Control --- p.53 / Chapter 5.3 --- CBR and ABR hybrid --- p.54 / Chapter 5.4 --- Simulation results --- p.55 / Chapter 6 --- Conclusions and Future research --- p.57 / Chapter 6.1 --- Future Research Suggestions --- p.58 / Chapter 6.2 --- Publications --- p.60 / Bibliography --- p.62
193

Low-Complexity Mode Selection for Rate-Distortion Optimal Video Coding

Kim, Hyungjoon. 06 April 2007
The primary objective of this thesis is to provide a low-complexity rate-distortion optimal coding mode selection method in digital video encoding. To achieve optimal compression efficiency in the rate-distortion framework with low computational complexity, we first propose a rate-distortion model and then apply it to the coding mode selection problem. The computational complexity of the proposed method is very low compared to overall encoder complexity because the proposed method uses simple image properties such as variance that can be obtained easily. Also, the proposed method gives significant PSNR gains over the mode selection scheme used in TM5 for MPEG-2 because the rate-distortion model considers rate constraints of each mode as well as distortion. We extend the model-based mode selection approach to motion vector selection for further improvement of the coding efficiency. In addition to our theoretical work, we present practical solutions to real-time implementation of encoder modules including our proposed mode selection method on digital signal processors. First, we investigate the features provided by most of the recent digital signal processors, for example, hierarchical memory structure and efficient data transfer between on-chip and off-chip memory, and then present practical approaches for real-time implementation of a video encoder system with efficient use of the features.
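As a rough illustration of the rate-distortion framework described above, the following sketch selects, per block, the coding mode with the smallest Lagrangian cost J = D + λR. The per-mode distortion/rate estimates, the λ value, and the function names are placeholders; the thesis derives its own estimates from simple image properties such as variance.

```python
# Minimal sketch of Lagrangian rate-distortion mode selection (assumed numbers).

def rd_cost(distortion, rate, lam):
    """Lagrangian cost J = D + lambda * R used to rank candidate modes."""
    return distortion + lam * rate

def select_mode(block_stats, lam):
    """Pick the mode minimizing J; block_stats maps mode -> (distortion, rate)."""
    return min(block_stats, key=lambda m: rd_cost(*block_stats[m], lam))

# Hypothetical per-macroblock estimates: (distortion, bits).
estimates = {"INTRA": (120.0, 96.0), "INTER": (90.0, 64.0), "SKIP": (200.0, 1.0)}
print(select_mode(estimates, lam=0.85))   # -> "INTER" under these numbers
```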
194

Direct Information Exchange in Wireless Networks: A Coding Perspective

Ozgul, Damla. August 2010
The rise in popularity of smartphones such as the BlackBerry and iPhone creates a strain on the world's mobile networks. The extensive use of these mobile devices leads to increasing congestion and a higher rate of node failures. This increasing demand from mobile wireless clients forces network providers to upgrade their wireless networks with more efficient and more reliable services to meet the demands of their customers. Therefore, there is growing interest in strategies that reduce the stress on wireless networks. One such strategy is to utilize cooperative communication. The purpose of this thesis is to provide more efficient and reliable solutions for direct information exchange problems. First, algorithms are presented to increase the efficiency of cooperative communication in a network where the clients can communicate with each other through a broadcast channel. These algorithms are designed to minimize the total transmission cost so that the communication will be less expensive and more efficient. Second, we consider a setting in which several clients exchange data through a relay. Our algorithms have provable performance guarantees, and we also verify their performance in practical settings through extensive simulations.
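The abstract does not spell out the coding scheme, but the basic saving that coded exchange through a relay builds on can be sketched as follows; the packet contents and function name are hypothetical.

```python
# Sketch of the XOR trick behind coded data exchange: if client A holds packet x
# and client B holds packet y, the relay broadcasts x XOR y once instead of
# forwarding x and y separately, and each client recovers the missing packet.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length packets."""
    return bytes(p ^ q for p, q in zip(a, b))

x = b"packet-from-A"                      # held by client A
y = b"packet-from-B"                      # held by client B

coded = xor_bytes(x, y)                   # one relay broadcast instead of two
recovered_at_A = xor_bytes(coded, x)      # A combines with what it already has
recovered_at_B = xor_bytes(coded, y)      # B does the same

assert recovered_at_A == y and recovered_at_B == x
```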
195

Robust Transmission of 3D Models

Bici, Mehmet Oguz. 01 November 2010
In this thesis, robust transmission of 3D models represented by static or time-consistent animated meshes is studied from the aspects of scalable coding, multiple description coding (MDC), and error-resilient coding. First, three methods for MDC of static meshes are proposed, based on multiple description scalar quantization, partitioning of wavelet trees, and optimal protection of the scalable bitstream by forward error correction (FEC), respectively. For each method, optimizations and tools to decrease complexity are presented. The FEC-based MDC method is also extended to a method for packet-loss-resilient transmission, followed by an in-depth performance comparison with state-of-the-art techniques, which showed significant improvement. Next, three methods for MDC of animated meshes are proposed, based on layer duplication and on partitioning the set of vertices of a scalably coded animated mesh by spatial or temporal subsampling, where each set is encoded separately to generate independently decodable bitstreams. The proposed MDC methods can achieve varying redundancy allocations by including a number of encoded spatial or temporal layers from the other description. The algorithms are evaluated with redundancy-rate-distortion curves and per-frame reconstruction analysis. Then, for layered predictive compression of animated meshes, three novel prediction structures are proposed and integrated into a state-of-the-art layered predictive coder. The proposed structures are based on weighted spatial/temporal prediction and on the angular relations of triangles between current and previous frames. The experimental results show that, compared to the state-of-the-art scalable predictive coder, up to 30% bitrate reductions can be achieved with the combination of the proposed prediction schemes, depending on the content and quantization level. Finally, optimal quality-scalability support is proposed for the state-of-the-art scalable predictive animated mesh coding structure, which only supports resolution scalability. Two methods based on arranging the bitplane order with respect to encoding or decoding order are proposed, together with a novel trellis-based optimization framework. Possible simplifications are provided to achieve a tradeoff between compression performance and complexity. Experimental results show that the optimization framework achieves quality scalability with significantly better compression performance than the state of the art without optimization.
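As a toy illustration of the partitioning idea, and under simplifying assumptions of my own (nearest-frame reconstruction, no layered coding or redundancy allocation), the sketch below splits an animated mesh into two descriptions by temporal subsampling, so that a decoder holding only one description still recovers a degraded but decodable animation.

```python
# Toy two-description coding of an animated mesh by temporal subsampling.

def split_descriptions(frames):
    """Split per-frame vertex data of an animated mesh into two descriptions."""
    return frames[0::2], frames[1::2]          # even frames, odd frames

def decode_single(description, total_frames):
    """Reconstruct all frames from one description using the nearest received frame."""
    return [description[min(t // 2, len(description) - 1)]
            for t in range(total_frames)]

frames = [[float(t), 0.0, 0.0] for t in range(6)]     # dummy one-vertex animation
even, odd = split_descriptions(frames)
print(decode_single(even, len(frames)))
# Frames 0, 0, 2, 2, 4, 4 are recovered: degraded, but still decodable.
```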
196

Source-channel coding for wireless networks

Wernersson, Niklas. January 2006
The aim of source coding is to represent information as accurately as possible using as few bits as possible, and in order to do so redundancy from the source needs to be removed. The aim of channel coding is in some sense the contrary, namely to introduce redundancy that can be exploited to protect the information when it is transmitted over a nonideal channel. Combining these two techniques leads to the area of joint source–channel coding, which in general makes it possible to achieve better performance when designing a communication system than when source and channel codes are designed separately. In this thesis two particular areas in joint source–channel coding are studied: multiple description coding (MDC) and soft decoding. Two new MDC schemes are proposed and investigated. The first is based on sorting a frame of samples and transmitting, as side information/redundancy, an index that describes the resulting permutation. If some of the transmitted descriptors are lost during transmission, this side information (if received) can be used to estimate the lost descriptors based on the received ones. The second scheme uses permutation codes to produce different descriptions of a block of source data. These descriptions can be used jointly to estimate the original source data. Finally, the MDC method of multiple description coding using pairwise correlating transforms, as introduced by Wang et al., is also studied. A modification of the quantization in this method is proposed which yields a performance gain. A well-known result in joint source–channel coding is that the performance of a communication system can be improved by using soft decoding of the channel output at the cost of higher decoding complexity. An alternative to this is to quantize the soft information and store pre-calculated soft decision values in a lookup table. In this thesis we propose new methods for quantizing soft channel information, to be used in conjunction with soft-decision source decoding. The issue of how best to construct finite-bandwidth representations of soft information is also studied.
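The sorting-based scheme can be sketched roughly as follows; the estimator used here (the midpoint of the nearest received neighbours in sorted order) is a simplification for illustration and is not necessarily the estimator proposed in the thesis.

```python
# Rough sketch of permutation-based MDC: send the samples plus their sorted ranks
# as side information; a lost sample's rank tells the decoder where it falls
# among the received samples, so it can be estimated from its sorted neighbours.

def encode(frame):
    """Return the frame plus, as side information, each sample's sorted rank."""
    order = sorted(range(len(frame)), key=lambda i: frame[i])
    rank = [0] * len(frame)
    for r, i in enumerate(order):
        rank[i] = r
    return frame, rank

def estimate_lost(received, rank, lost_idx):
    """Estimate a lost sample from received samples adjacent to it in sorted order."""
    below = [v for i, v in received.items() if rank[i] < rank[lost_idx]]
    above = [v for i, v in received.items() if rank[i] > rank[lost_idx]]
    lo = max(below) if below else min(above)
    hi = min(above) if above else max(below)
    return (lo + hi) / 2                  # crude midpoint interpolation

frame = [7.0, 1.0, 4.0, 9.0, 3.0]
_, rank = encode(frame)
received = {i: v for i, v in enumerate(frame) if i != 2}   # sample 2 was lost
print(estimate_lost(received, rank, lost_idx=2))           # -> 5.0 (true value 4.0)
```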
197

Network coding for WDM all-optical networks

Manley, Eric D. January 2009
Thesis (Ph.D.)--University of Nebraska-Lincoln, 2009. / Title from title screen (site viewed October 15, 2009). PDF text: xx, 160 p. : ill. (some col.) ; 1 Mb. UMI publication number: AAT 3360160. Includes bibliographical references. Also available in microfilm and microfiche formats.
198

Superposition coded modulation /

Tong, Jun. January 2009
Thesis (Ph.D.)--City University of Hong Kong, 2009. / "Submitted to Department of Electronic Engineering in partial fulfillment of the requirements for the degree of Doctor of Philosophy." Includes bibliographical references (leaves [142]-152)
199

Non-uniform filter banks and context modeling for image coding /

Ho, Man-wing. January 2001
Thesis (M. Phil.)--University of Hong Kong, 2002. / Includes bibliographical references (leaves 90-94).
200

Scan test data compression using alternate Huffman coding

Baltaji, Najad Borhan. 13 August 2012
Huffman coding is a good method for statistically compressing test data with high compression rates. Unfortunately, the on-chip decoder required to decompress the encoded test data after it is loaded onto the chip may be too complex. With limited die area, the decoder complexity becomes a drawback, making standard Huffman coding less than ideal for scan data compression. Selectively encoding test data using Huffman coding can provide similarly high compression rates while reducing the complexity of the decoder. A smaller and less complex decoder makes Alternate Huffman Coding a viable option for compressing and decompressing scan test data.
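For reference, a minimal Huffman coder over fixed-length scan slices is sketched below. The slice width and example data are arbitrary, and the selective ("alternate") encoding of only the most frequent patterns that the thesis proposes is not reproduced here.

```python
# Minimal Huffman coder treating each fixed-length scan slice as one symbol.
import heapq
from collections import Counter

def huffman_code(symbols):
    """Return a prefix code {symbol: bitstring} built from symbol frequencies."""
    counts = Counter(symbols)
    if len(counts) == 1:                          # degenerate single-symbol stream
        return {next(iter(counts)): "0"}
    # Heap entries: (frequency, tie-breaker, {symbol: partial codeword}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(counts.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

# Frequent slices get short codewords; here 4-bit slices are the symbols.
slices = ["0000", "0000", "0000", "1111", "0000", "1010", "0000", "1111"]
code = huffman_code(slices)
encoded = "".join(code[s] for s in slices)
print(code, f"{len(encoded)} bits vs {4 * len(slices)} uncompressed")
```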
