211

Algorithm and hardware based architectural design targeting the intra-frame prediction of the HEVC video coding standard

Palomino, Daniel Munari Vilchez January 2013
This work presents a hardware architecture for the intra-frame prediction of the emerging HEVC video coding standard. HEVC is being developed with the main goal of increasing compression efficiency by 50% compared with H.264/AVC, the current state-of-the-art video coding standard. To reach this goal, several new coding tools were developed for the new standard and, although they succeed in increasing its compression efficiency, they also increase the computational complexity of the encoding process. Considering only the advances in intra-frame prediction with respect to H.264/AVC, several new directional prediction modes were introduced and more block sizes can be handled by the intra prediction process. In this context, this work applies two approaches to improve the performance of intra-frame prediction in HEVC encoders. First, heuristic-based fast intra mode decision algorithms were developed; the results show that the computational complexity of intra prediction can be reduced with small losses in compression efficiency (bit rate and visual quality): in the worst case, a 6.99% bit-rate increase and a 0.12 dB PSNR loss on average, for a reduction of up to 35% in processing time. Then, using one of the developed fast algorithms as its basis, an intra prediction hardware architecture was designed. Besides the complexity reduction provided by the fast algorithm, hardware design techniques such as a higher degree of parallelism and pipelining were applied to improve the architecture's performance. Synthesis results for the IBM 0.65 µm technology show that the architecture reaches a maximum operating frequency of 500 MHz, a throughput sufficient to perform intra-frame prediction at more than 30 frames per second for high-definition resolutions such as Full HD (1920x1080 pixels).
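To make the coarse-then-refine idea behind fast intra mode decision concrete, the sketch below shows one generic two-stage heuristic in Python. It is not the algorithm developed in the thesis: the `predict` callback (returning the HEVC intra prediction of a block for a given mode) and the SAD cost are placeholders chosen only for illustration.

```python
# A minimal coarse-then-refine intra mode search, assuming a caller-supplied
# predict(block_ctx, mode) that returns the HEVC intra prediction for one of
# the 35 modes (0 = planar, 1 = DC, 2..34 = angular).

def sad(a, b):
    """Sum of absolute differences between two equally sized 2-D blocks."""
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def fast_intra_mode_decision(block, block_ctx, predict, coarse_step=4, refine=2):
    # Stage 1: test DC, planar and a subsampled set of the angular modes.
    candidates = [0, 1] + list(range(2, 35, coarse_step))
    best = min(candidates, key=lambda m: sad(block, predict(block_ctx, m)))
    if best < 2:                      # DC or planar won: no angular refinement.
        return best
    # Stage 2: refine only in a small window around the best coarse angle.
    lo, hi = max(2, best - refine), min(34, best + refine)
    return min(range(lo, hi + 1), key=lambda m: sad(block, predict(block_ctx, m)))
```

Restricting the second pass to a few neighbouring angles is what trades a small bit-rate/PSNR penalty for a large reduction in the number of predictions that must be evaluated.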
212

Intra-frame prediction dedicated hardware architecture for encoders of the H.264/AVC video coding standard

Diniz, Claudio Machado January 2009
Video coding is essential for digital video applications: because of the extremely high data volume of digital video, compression is applied before storage or transmission. H.264/AVC is the state-of-the-art video coding standard and introduces a set of novel tools with respect to earlier standards; these tools provide a significant gain in compression at the cost of increased complexity. Intra-frame prediction is one of the novel tools of H.264/AVC and is responsible for reducing the spatial redundancy of the video using only information contained in the current frame. H.264/AVC intra-frame prediction achieves compression gains over the most widely used still-image coding standards, JPEG and JPEG 2000, but adds complexity and latency to the video encoder design, especially when high-definition video must be encoded in real time. In this context, this dissertation presents the proposal and development of a dedicated intra-frame prediction hardware architecture for encoders compatible with the H.264/AVC video coding standard. The developed architecture encodes high-definition video in real time while operating at a clock frequency 46% lower than that of the best work found in the literature. In the future, the architecture is to be integrated into an H.264/AVC Main profile hardware encoder.
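For readers unfamiliar with what the predicted blocks look like, the sketch below implements three of the nine H.264/AVC 4x4 luma intra modes in Python (vertical, horizontal and DC). Here `top` and `left` are the four reconstructed neighbouring samples above and to the left of the block; the remaining directional modes, edge-availability handling and mode signalling are omitted, so this is only an illustration of the principle.

```python
def intra4x4_vertical(top):
    """Mode 0: every row repeats the four reconstructed samples above the block."""
    return [list(top) for _ in range(4)]

def intra4x4_horizontal(left):
    """Mode 1: every column repeats the four reconstructed samples to the left."""
    return [[left[y]] * 4 for y in range(4)]

def intra4x4_dc(top, left):
    """Mode 2: every sample is the rounded mean of the top and left neighbours."""
    dc = (sum(top) + sum(left) + 4) >> 3
    return [[dc] * 4 for _ in range(4)]

# Example: a bright top edge and a dark left edge give an intermediate DC block.
print(intra4x4_dc([200, 200, 210, 210], [100, 100, 100, 100]))
```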
213

Subjective video quality assessment applied to scalable video coding

Daronco, Leonardo Crauss January 2009
The constant advances in data transmission and processing over the past years have enabled several applications and services based on multimedia data, such as video streaming, videoconferencing, remote classes and IPTV. In addition, advances in other areas of computing and engineering have produced a wide variety of devices able to access these services, from personal computers to mobile phones. Many of these applications and devices are widely used today and, as the technology advances, users become more demanding about the quality of the services they use. Given the diversity of networks and devices currently in use, one of the main challenges of these systems is to provide universal access to a transmission, adapting it to the receivers' characteristics and conditions. A suitable solution is to combine scalable video coding with layered transmission over IP multicast, controlled by mechanisms for adaptability and congestion control. Since the final product of these multimedia transmissions is the multimedia data itself (mainly video and audio) presented to the user, the quality of these data is fundamental for the performance of the system and the satisfaction of its users. This work presents a study of subjective quality assessment applied to video sequences coded with the scalable extension of the H.264 standard (SVC). A set of experiments was carried out to evaluate, mainly, the effect of transmission instability (variation in the number of video layers received) and the influence of the three scalability methods (spatial, temporal and quality) on subjective video quality. The test design was based on a layered transmission system that uses protocols for adaptability and congestion control. The subjective assessments followed the ACR-HRR methodology and the recommendations of ITU-R Rec. BT.500 and ITU-T Rec. P.910. The results show that, contrary to expectations, the modelled instability does not cause large changes in overall subjective quality compared with a stable transmission, and that temporal scalability tends to produce considerably lower quality than the spatial and quality methods, the latter giving the best quality. The main contributions of this work are the results obtained in the subjective assessments, together with the methodology used throughout the work (definition of the evaluation plan, use of tools such as JSVM, selection of the test material and execution of the assessments), the applications developed, the definition of future work and the identification of further problems that can be addressed with subjective quality evaluations.
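As a small illustration of how ACR-HRR ratings are typically turned into scores, the sketch below computes a differential mean opinion score with a 95% confidence interval. The "rating minus hidden-reference rating plus 5" formulation is a common reading of the hidden reference removal described in ITU-T Rec. P.910 and is assumed here; it is not taken from the thesis.

```python
from math import sqrt
from statistics import mean, stdev

def dmos_hrr(test_scores, ref_scores):
    """Differential scores for ACR-HRR: each subject's rating of the processed
    sequence minus the same subject's rating of the hidden reference, shifted
    back onto the 1..5 ACR scale."""
    diff = [t - r + 5 for t, r in zip(test_scores, ref_scores)]
    m = mean(diff)
    # 95% confidence interval under an approximately normal assumption.
    ci = 1.96 * stdev(diff) / sqrt(len(diff)) if len(diff) > 1 else 0.0
    return m, ci

# Example: five subjects rated a degraded sequence and its hidden reference.
print(dmos_hrr([3, 4, 3, 4, 3], [5, 5, 4, 5, 5]))
```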
214

Video coding system based on three-dimensional, fast and progressive transforms

Testoni, Vanessa 02 September 2007
Advisors: Max Henrique Machado Costa, Leonardo de Souza Mendes / Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Abstract: Research on video coding constantly seeks techniques that reach ever higher compression rates. The increase in compression is generally obtained at the cost of more complex coding algorithms, which in turn is supported by the continuous growth in processing power. However, in some video coding and transmission scenarios the use of high-capacity processors is not possible or not desirable. Such scenarios require video coders focused on reduced execution time and on the use of few computational resources, such as the coding system presented in this work. The proposed system uses a three-dimensional Hadamard transform, implemented in an optimized (fast) form, and an adaptive Golomb entropy coder applied to bit planes, which adds to the system the desirable property of being progressive. The implementation is designed to perform only fast mathematical operations and to allocate little memory. Even with the use of these speed-oriented techniques, good experimental results were obtained in terms of peak signal-to-noise ratio (PSNR) as a function of the bit rate per pixel. / Master's degree in Electrical Engineering (Telecommunications and Telematics)
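The core transform described above can be illustrated with a short sketch: a fast (butterfly) Walsh-Hadamard transform applied separably along the two spatial axes and the temporal axis of a cube of samples. The cube size, the use of NumPy and the absence of any normalization or bit-plane Golomb coding are assumptions made here for brevity, not details taken from the dissertation.

```python
import numpy as np

def fwht(v):
    """Fast Walsh-Hadamard transform of a 1-D array whose length is a power
    of two; only additions and subtractions are needed (butterfly structure)."""
    v = v.copy()
    h, n = 1, len(v)
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = v[j], v[j + h]
                v[j], v[j + h] = a + b, a - b
        h *= 2
    return v

def hadamard_3d(cube):
    """Separable 3-D Hadamard transform of a block of video samples, e.g. a
    4x4x4 or 8x8x8 cube spanning two spatial dimensions and time."""
    out = np.asarray(cube, dtype=np.int64)
    for axis in range(3):
        out = np.apply_along_axis(fwht, axis, out)
    return out

# Example: a constant 4x4x4 cube concentrates all energy in the DC term.
print(hadamard_3d(np.full((4, 4, 4), 10))[0, 0, 0])  # -> 640
```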
215

Transcoding H.265/HEVC

Tamanna, Sina January 2013
Video transcoding is the process of converting compressed video signals to adapt video characteristics such as bit rate, resolution, or codec, so as to meet the specifications of communication channels and endpoint devices. A straightforward transcoding solution is to fully decode and re-encode the video. However, this method is computationally expensive and thus unsuitable for applications with tight resource constraints, such as software-based real-time environments. Therefore, efficient transcoding methods are required that reduce the transcoding complexity while preserving video quality. Prior transcoding methods were designed for video coding standards such as H.264/AVC and MPEG-2. H.265/HEVC has introduced new coding concepts, e.g. the quad-tree-based block structure, that are fundamentally different from those in prior standards; these concepts require existing transcoding methods to be adapted and novel solutions to be developed. This work primarily addresses efficient HEVC transcoding for bit rate adaptation (reduction). The goal is to understand the transcoding behaviour of some straightforward transcoding strategies and to subsequently optimize the complexity/quality trade-off by providing heuristics that reduce the number of coding options to evaluate. A transcoder prototype was developed based on the HEVC reference software HM-8.2. The proposed transcoder reduces the transcoding time compared to full decoding and encoding by at least 80% while inducing a coding performance drop within a margin of 5%. The thesis was carried out in collaboration with Ericsson Research in Stockholm. / Video content is produced daily by a variety of electronic devices; however, storing and transmitting video signals in raw format is impractical because of the excessive resources required. Today, popular video coding standards such as MPEG-4 and H.264 are used to compress video signals before storage and transmission, so efficient video coding plays an important role in video communications. As video applications become widespread, there is a need for high-compression, low-complexity video coding algorithms that preserve image quality. Standards organizations such as ISO and ITU-T (VCEG), in collaboration with many companies, have developed video coding standards to meet the requirements of their day. The Advanced Video Coding (AVC/H.264) standard is the most widely used video coding method: it is one of the major standards used for video compression on Blu-ray discs and is also widely used by video streaming services, TV broadcasting, and video conferencing applications. Currently the most important development in this area is the introduction of the H.265/HEVC standard, which was finalized in January 2013. The aim of the standardization is a video compression specification capable of compression roughly twice as effective as that of H.264/AVC at comparable quality. There is a wide range of platforms that receive digital video: TVs, personal computers, mobile phones, and tablets each have different computational, display, and connectivity capabilities, so video has to be converted to meet the specifications of the target platform. This conversion is achieved through video transcoding. The straightforward solution is to decode the compressed video signal and re-encode it in the target compression format, but this process is computationally complex. Particularly in real-time applications, there is a need to exploit the information that is already available in the compressed video bit-stream to speed up the conversion. The objective of this thesis is to investigate efficient transcoding methods for HEVC, using full decode/re-encode as the performance reference.
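One simple way to exploit the information already present in the incoming compressed bit-stream, and a common baseline in HEVC transcoding work, is to restrict the re-encoder's quad-tree search to depths close to the CU depth decoded from the original stream. The sketch below shows such a heuristic under assumed interfaces (`decoded_depth`, `rd_cost`); it is not the specific strategy evaluated in the thesis.

```python
def candidate_depths(decoded_depth, max_depth=3, slack=1):
    """CU depths the re-encoder is allowed to test for a region, trusting the
    depth found in the incoming HEVC bitstream within +/- `slack` levels."""
    lo = max(0, decoded_depth - slack)
    hi = min(max_depth, decoded_depth + slack)
    return list(range(lo, hi + 1))

def choose_depth(region, decoded_depth, rd_cost):
    """Pick the depth with the lowest rate-distortion cost among the restricted
    candidates; rd_cost(region, depth) is a placeholder for the encoder's RDO."""
    return min(candidate_depths(decoded_depth), key=lambda d: rd_cost(region, d))
```

Shrinking `slack` reduces the number of coding options evaluated at the cost of some compression efficiency, which is exactly the kind of complexity/quality trade-off the thesis sets out to optimize.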
216

H.264 Baseline Real-time High Definition Encoder on CELL

Wei, Zhengzhe January 2010
In this thesis an H.264 baseline high-definition encoder is implemented on the CELL processor. The target video format is YUV 4:2:0 1080p at 30 frames per second. To meet the real-time requirement, a system architecture that reduces DMA requests is designed for large memory accesses, and several key computing kernels (intra-frame encoding, motion estimation search and entropy coding) are designed and ported to the CELL processing units. A main challenge is to find a good trade-off between DMA latency and processing time: the limited 256 KB on-chip memory of each SPE has to be organized efficiently and exploited in a SIMD fashion. CAVLC entropy coding is performed on the PPE, but not in real time. The experimental results show that the encoder is able to encode I frames at high quality and to encode common 1080p video sequences in real time. Using five SPEs and a 63 KB executable code size, 20.72 million cycles are needed for one SPE to encode its partition of a P frame. The average PSNR of P frames increases by at most 1.52%. For fast-motion video sequences, a 64x64 search range gives better frame quality than a 16x16 search range while requiring less than twice the computing cycles of the 16x16 case. The results also demonstrate that more of the potential power of the CELL processor can be exploited for multimedia computing. The H.264 Main profile will be implemented in future phases of this encoder project. Since the platform used is the IBM Full-System Simulator, DMA performance on a real CELL processor remains an open question, and real-time entropy coding is another challenge on CELL.
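The motion estimation kernel mentioned above boils down to evaluating a block-matching cost over a search window. The sketch below is a plain full-search SAD in Python, shown only to fix ideas; on the CELL the inner loop would be vectorized with SPE SIMD intrinsics and the frame data staged into local store via DMA, none of which is reproduced here.

```python
def sad_16x16(cur, ref, bx, by, mx, my):
    """SAD between the 16x16 block of `cur` at (bx, by) and the candidate
    block of `ref` displaced by the motion vector (mx, my)."""
    return sum(abs(cur[by + y][bx + x] - ref[by + my + y][bx + mx + x])
               for y in range(16) for x in range(16))

def full_search(cur, ref, bx, by, search_range=16):
    """Exhaustive search over a +/- search_range window; returns (sad, mx, my)."""
    h, w = len(ref), len(ref[0])
    best = None
    for my in range(-search_range, search_range + 1):
        for mx in range(-search_range, search_range + 1):
            # Skip candidates that fall outside the reference frame.
            if not (0 <= by + my and by + my + 16 <= h
                    and 0 <= bx + mx and bx + mx + 16 <= w):
                continue
            cost = sad_16x16(cur, ref, bx, by, mx, my)
            if best is None or cost < best[0]:
                best = (cost, mx, my)
    return best
```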
217

Light Field Coding Using Panoramic Projection

Axelsson, Arvid January 2014
A new generation of 3D displays provides depth perception without the need for glasses and allows the viewer to see content from many different directions. Providing video for these displays requires capturing the scene with several cameras at different viewpoints, whose data together form light field video. Encoding such video with existing video codecs requires a large amount of data, and the amount increases quickly with the number of views, which this application needs in large numbers. One such coding is the multiview extension of High Efficiency Video Coding (MV-HEVC), which encodes a number of similar video streams as different layers. A new coding scheme for light field video, called Panoramic Light Field (PLF), is implemented and evaluated in this thesis. The main idea behind the coding is to project all points in a scene that are visible from any of the viewpoints onto a single, global view, similar to how texture mapping maps a texture onto a 3D model in computer graphics. Whereas objects ordinarily shift position in the frame as the camera position changes, this is not the case under this projection: a visible point in space is projected to the same image pixel regardless of viewpoint, resulting in large similarities between images from different viewpoints. This similarity between the layers of the light field video helps to achieve more efficient compression when the projection is combined with existing multiview coding. In order to evaluate the scheme, 3D content was created and software was developed to encode it using PLF. Video using this coding is compared with existing technology: a straightforward encoding of the views using MV-HEVC. The results show that the PLF coding performs better on the sample content at lower quality levels, while it is worse at higher bit rates because of the quality loss introduced by the projection procedure. It is concluded that PLF is a promising technology, and suggestions are given for future research that may improve its performance further. / New techniques are being developed for 3D displays that can show light fields: images and video captured with arrays of cameras. Such video requires large amounts of data. A new light field coding, aiming at a better trade-off between image quality and bit rate, is evaluated in this thesis.
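The key property of the panoramic projection, that a visible scene point lands on the same global pixel regardless of the capturing viewpoint, can be sketched as a depth-driven warp. The sketch below assumes a rectified, horizontally aligned camera array in which a point with disparity d shifts by d pixels per unit of camera offset; the real PLF scheme projects onto scene geometry as in texture mapping, so treat this purely as an illustration with made-up interfaces (`disparity_of`, `view_offset`).

```python
def to_global(x, y, d, view_offset):
    """Map pixel (x, y) seen with disparity d from a camera at `view_offset`
    (in baseline units, 0 for the reference camera) into the shared view."""
    return x + d * view_offset, y

def project_view(view, disparity_of, view_offset, width, height):
    """Splat one camera view into the global texture; pixels from different
    cameras that observe the same scene point map to the same global pixel."""
    panorama = [[None] * width for _ in range(height)]
    for y, row in enumerate(view):
        for x, value in enumerate(row):
            gx, gy = to_global(x, y, disparity_of(x, y), view_offset)
            gx = int(round(gx))
            if 0 <= gx < width and 0 <= gy < height:
                panorama[gy][gx] = value   # nearest-neighbour splat, no blending
    return panorama
```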
218

Rate-distortion based video coding with adaptive mean-removed vector quantization

Hamzaoui, Raouf, Saupe, Dietmar, Wagner, Marcel 01 February 2019
In this paper we improve the rate-distortion performance of a previously proposed video coder based on frame replenishment and adaptive mean-removed vector quantization. This is realized by determining for each block of a given frame the optimal encoding mode in the rate-distortion sense. The algorithm is a new contribution to very low bit rate video coding with adaptive vector quantization suitable for videophone applications. Experimental results comparing the two coders for several test sequences at different bit rates are provided.
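To make the coding tool concrete, the sketch below shows the basic mean-removed vector quantization step for one block: the block mean is scalar-quantized and sent separately, and the mean-removed residual is matched against a shape codebook. The rate-distortion mode decision described in the paper (typically choosing each block's encoding mode by minimizing distortion plus lambda times rate) is not reproduced; `mean_quantizer` and `codebook` are assumed inputs, not details of the authors' coder.

```python
def encode_block_mrvq(block, codebook, mean_quantizer):
    """Mean-removed VQ for one block: code the quantized mean separately, then
    pick the nearest codevector for the mean-removed residual (squared error)."""
    flat = [p for row in block for p in row]
    q_mean = mean_quantizer(sum(flat) / len(flat))
    residual = [p - q_mean for p in flat]
    idx = min(range(len(codebook)),
              key=lambda i: sum((r - c) ** 2
                                for r, c in zip(residual, codebook[i])))
    return q_mean, idx

def decode_block_mrvq(q_mean, idx, codebook, n):
    """Reconstruct the n x n block as quantized mean plus the chosen codevector."""
    flat = [q_mean + c for c in codebook[idx]]
    return [flat[r * n:(r + 1) * n] for r in range(n)]
```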
219

Mathematical and experimental approaches in shape-adaptive video coding

Emilio Carlos Acocella 07 December 2005
This thesis investigates, theoretically and experimentally, several topics in shape-adaptive coding of arbitrarily shaped objects. Aspects related to the efficient representation and coding of object texture and contour are analyzed, and solutions to the identified problems are proposed; the methods are evaluated on image sequences commonly used in related work. First, a mathematical formulation of shape-adaptive transforms based on linear operators is developed and, from it, a metric is derived that allows the theoretical evaluation of the performance of these transforms; comparison with experimental results shows that the proposed metric is an effective tool for this purpose. Next, the question of how best to align the 1-D transform coefficients of two image columns of different lengths is analyzed, and a low-complexity phase-based alignment method is proposed; experimental results show that it outperforms other methods reported in the literature. Specific problems of shape-adaptive coding concerning the quantization of the transform coefficients are then treated mathematically for several common implementation variants, and a method is presented that simultaneously solves the mean-value distortion and the correlation of the signal error introduced by quantization; experiments confirm its higher coding efficiency compared with methods proposed in recent work. The differential chain coder, an efficient and frequently employed structure for lossless encoding of object boundaries, is also studied: a large number of possible modifications is identified and evaluated, resulting in an implementation that incorporates the changes that significantly increase the efficiency of shape coding. Finally, a generic scheme for sub-band decomposition using a shape-adaptive discrete wavelet transform is proposed. The experimental results indicate that this scheme can provide coding efficiency superior to that of the shape-adaptive discrete cosine transform, especially at low bit rates, suggesting a promising new approach for shape-adaptive video coding.
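Since the differential chain coder is central to the contour-coding part of the thesis, the sketch below shows the unmodified baseline in Python: a closed 8-connected contour is encoded as one absolute direction followed by direction differences, which cluster around zero and are therefore cheap to entropy-code. The thesis evaluates many modifications of this scheme; none of them are reproduced here, and the coordinate convention is an assumption of the sketch.

```python
# 8-connected chain directions, indexed 0..7 counter-clockwise starting at east.
DIRS = [(1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1)]

def differential_chain_code(contour):
    """Encode a closed contour (consecutive points must be 8-connected) as the
    first absolute direction plus per-step direction changes in -4..3."""
    dirs = [DIRS.index((x1 - x0, y1 - y0))
            for (x0, y0), (x1, y1) in zip(contour, contour[1:] + contour[:1])]
    diffs = [((d - p + 4) % 8) - 4 for p, d in zip(dirs, dirs[1:])]
    return dirs[0], diffs

# Example: the four pixels of a 2x2 square traversed as a closed contour.
print(differential_chain_code([(0, 0), (1, 0), (1, 1), (0, 1)]))
```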
220

Reconfigurable Computing For Video Coding

Huang, Jian 01 January 2010
Video coding is widely used in our daily life. Due to its high computational complexity, hardware implementation is usually preferred. In this research, we investigate both an ASIC hardware design approach and a reconfigurable hardware design approach for video coding applications. First, we present a unified architecture that can perform the Discrete Cosine Transform (DCT), the Inverse Discrete Cosine Transform (IDCT), and DCT-domain motion estimation and compensation (DCT-ME/MC). The proposed architecture is a wavefront-array-based processor with a highly modular structure consisting of 8x8 Processing Elements (PEs). By exploiting statistical properties and arithmetic operations, it can be used as a high-performance hardware accelerator for video transcoding applications. We show how the different core algorithms can be mapped onto the same hardware fabric and executed on the predefined PEs. In addition to the simplified design process and the savings in hardware resources, we demonstrate that a high throughput rate can be achieved for IDCT and DCT-MC by fully exploiting the sparseness of the DCT coefficient matrix. Compared to a fixed ASIC architecture, a reconfigurable hardware design approach offers higher flexibility, lower cost, and faster time-to-market. We propose a self-reconfigurable platform that can reconfigure the architecture of the DCT computations at run time using dynamic partial reconfiguration. The scalable DCT architecture can compute different numbers of DCT coefficients in zig-zag scan order to adapt to different requirements in power consumption, hardware resources, and performance. A configuration manager implemented in the embedded processor adaptively controls the reconfiguration of the scalable DCT architecture at run time. In addition, we use the LZSS algorithm to compress the partial bitstreams and on-chip BlockRAM as a cache to reduce the latency overhead of loading the partial bitstreams from off-chip memory for run-time reconfiguration; a dedicated hardware module performs parallel reconfiguration of the partial bitstreams. The experimental results show that this approach reduces external memory accesses by 69% and achieves a 400 MB/s reconfiguration rate. A prediction algorithm for zero-quantized DCT (ZQDCT) coefficients is used to control the run-time reconfiguration of the proposed scalable architecture, and 12 different DCT computation modes, including zonal coding, multi-block processing, and parallel-sequential stage modes, are supported to reduce power consumption, hardware resources, and computation time with only a small quality degradation. Detailed trade-offs among power, throughput, and quality are investigated and used as the criterion for self-reconfiguration to meet the requirements set by the users.
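The scalable "number of DCT coefficients in zig-zag order" idea can be illustrated with a few lines of Python: a zig-zag scan ordering (low-frequency coefficients first, alternating direction along the anti-diagonals) and a zonal mode that keeps only the first N coefficients. The exact scan convention and block handling here are illustrative assumptions, not the architecture's implementation.

```python
def zigzag_order(n=8):
    """A zig-zag scan of an n x n block: low-frequency coefficients first,
    alternating direction along successive anti-diagonals."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[1] if (p[0] + p[1]) % 2 else p[0]))

def zonal_keep(coeffs, keep):
    """Keep only the first `keep` coefficients in zig-zag order and zero the
    rest -- the kind of reduced-precision mode a scalable DCT unit can switch
    to at run time to save power and computation."""
    n = len(coeffs)
    out = [[0] * n for _ in range(n)]
    for r, c in zigzag_order(n)[:keep]:
        out[r][c] = coeffs[r][c]
    return out
```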
