121 |
General Purpose Computing in GPU - a Watermarking Case Study / Hanson, Anthony 08 1900 (has links)
The purpose of this project is to explore the GPU for general-purpose computing. The GPU is a massively parallel computing device with high throughput and high arithmetic intensity; it has a large market presence and gains computational power each year through architectural innovations, making it an ideal candidate to complement the CPU in performing computations. The GPU follows the single instruction multiple data (SIMD) model for applying operations to its data, which makes it very useful for assisting the CPU with computations on data that is highly parallel in nature. The compute unified device architecture (CUDA) is a parallel computing and programming platform for NVIDIA GPUs. The main focus of this project is to show the power, speed, and performance of a CUDA-enabled GPU for digital video watermark insertion in the H.264 video compression domain. Digital video watermarking in general is a highly computationally intensive process that is strongly dependent on the video compression format in use. The H.264/MPEG-4 AVC video compression format achieves high compression efficiency at the expense of high computational complexity, leaving little room for an imperceptible watermark to be inserted. Employing a human visual model to limit the distortion and degradation of visual quality introduced by the watermark is a good choice when designing a video watermarking algorithm, though it adds further computational complexity. Research is conducted into how joint CPU-GPU execution of the digital watermarking application can boost its speed several times compared to running it on a standalone CPU, using the NVIDIA Visual Profiler to optimize the application.
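To illustrate why watermark insertion maps well to the SIMD model, the sketch below embeds one watermark bit per 4x4 block of quantized coefficients by adjusting the parity of a mid-frequency coefficient. The embedding rule, coefficient position, and function names are illustrative assumptions, not the thesis's actual algorithm; the point is that every block is processed independently, which is exactly what a CUDA kernel exploits.

```python
# Hypothetical sketch of a per-block watermark kernel: each 4x4 block of
# quantized coefficients (flattened to 16 values) carries one watermark bit
# in the parity of a mid-frequency coefficient. Block-to-block independence
# is what makes the operation SIMD-friendly.

def embed_bit(block, bit, pos=5):
    """Embed one watermark bit in the parity of coefficient `pos`."""
    coeffs = list(block)
    c = coeffs[pos]
    if abs(c) % 2 != bit:          # adjust parity to carry the bit
        c += 1 if c >= 0 else -1
    coeffs[pos] = c
    return coeffs

def extract_bit(block, pos=5):
    return abs(block[pos]) % 2

def embed_watermark(blocks, bits):
    # On a GPU, this loop is the part mapped to thousands of CUDA threads,
    # one (or a few) coefficient blocks per thread.
    return [embed_bit(b, bit) for b, bit in zip(blocks, bits)]
```

On the GPU, the loop body of `embed_watermark` would become the per-thread kernel, with one thread per block of coefficients.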
|
122 |
A video coding system for digital TV - SBTVD (Um sistema de codificação de vídeo para TV digital) / Linck, Iris Correa das Chagas 29 June 2012 (has links)
Previous issue date: 2012 / FINEP - Financiadora de Estudos e Projetos / Neste trabalho é desenvolvido um algoritmo híbrido que simula o comportamento do Codificador/Decodificador de vídeo H.264/AVC, ou simplesmente CODEC H.264, utilizado no Sistema Brasileiro de Televisão Digital. O algoritmo proposto tem a finalidade de buscar a melhor configuração possível de seis dos principais parâmetros utilizados para a configuração do CODEC H.264. Este problema é abordado como um problema de otimização combinatória conhecido como Problema de Seleção de Partes e que é classificado como NP-Difícil. O algoritmo híbrido proposto, denominado Simulador de Metaheurísticas aplicado a um CODEC (SMC), foi desenvolvido com base em duas metaheurísticas: Busca Tabu e Algoritmo Genético. Os seis parâmetros de configuração a serem otimizados pelo SMC são: o bit rate; o frame rate; os parâmetros de quantização de quadros tipo B, tipo P e tipo I e a quantidade de quadros tipo B em um grupo de imagens (GOP – Group of Pictures). Os dois primeiros parâmetros mencionados atuam basicamente sobre a qualidade da imagem do vídeo enquanto que os demais parâmetros atuam diretamente na compressão do vídeo. Experimentos e testes foram feitos utilizandose o CODEC H.264 desenvolvido no Projeto Plataforma de Convergência Digital IPTV/TV Digital (DigConv). Nos experimentos o CODEC tem seus parâmetros configurados de acordo com os resultados obtidos pelo SMC. Um vídeo é codificado no CODEC H.264 para que se possa analisar a sua qualidade de imagem e o seu grau de compressão após o processo de codificação. É feita uma correlação entre esses resultados e a Função Objetivo do SMC. A qualidade da imagem é medida através da métrica mais utilizada na literatura, o PSNR (Peak Signal to Noise Ratio), que é calculada pelo próprio CODEC ao final da codificação de um vídeo. 
Verificouse que à medida que a Função Objetivo aumenta, o CODEC H.264 consegue obter uma melhor qualidade de imagem e um maior grau de compressão de vídeo. / In this work is developed a hybrid algorithm that simulates the behavior of the H.264/AVC video encoder/decoder, or simply H.264 video CODEC, used in the Brazilian System of Digital Television. The proposed algorithm intends to seek the best possible configuration of the six main parameters used for configuring the H.264 video CODEC. This problem is treated as a combinatorial optimization problem known as the Parties Selection Problem, which is classified as NP-Hard. The proposed hybrid algorithm, called Simulator Metaheuristcs applied to a CODEC (SMC), was developed based on two metaheuristics: Tabu Search and Genetic Algorithm. The six configuration parameters to be optimized by the SMC are the bit rate, frame rate, the parameters of quantization tables of type B, type I and type P and the amount of frames type B in a group of pictures (GOP - Group of Pictures).The first two parameters mentioned, work primarily on the quality of the video image while the other parameters act directly on the video compression. Experiments and tests were done using the video CODEC H.264 developed in Digital Convergence Platform IPTV/Digital TV Project (DigConv). DigConv Project. In the experiments the CODEC has its parameters set according to the results obtained by the SMC. Then, a video is encoded by the CODEC in order to analyze the video image quality and the video compression degree reached after the encoding process. It is made a correlation between these results and the objective function of the SMC. The picture quality is measured by the metric most often used in literature, the PSNR (Peak Signal to Noise Ratio), which is calculated by the CODEC at the end of a video encoding process. It was found that as the objective function has increased, the CODEC reached a better image quality and a higher video compression.
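The abstract does not give the SMC's internals, so the following is only a hedged sketch of a genetic-algorithm search over the six named parameters, with an arbitrary stand-in objective function. The real SMC combines Tabu Search with a Genetic Algorithm and correlates its objective with PSNR and compression measured by the DigConv CODEC; the parameter values and fitness model below are illustrative assumptions.

```python
import random

# Toy sketch of the search loop behind the SMC: a genetic algorithm over the
# six H.264 configuration parameters named in the abstract. The objective
# function is an arbitrary stand-in, not the thesis's fitness function.

PARAM_SPACE = {
    "bit_rate":   [512, 1024, 2048, 4096],   # kbit/s (illustrative values)
    "frame_rate": [15, 24, 30],
    "qp_i":       list(range(20, 41)),       # quantization parameters
    "qp_p":       list(range(20, 41)),
    "qp_b":       list(range(20, 41)),
    "b_frames":   [0, 1, 2, 3],              # B frames per GOP
}
KEYS = list(PARAM_SPACE)

def objective(cfg):
    # Stand-in: favor high bit/frame rate (quality) and low QPs (less loss).
    return (cfg["bit_rate"] / 4096 + cfg["frame_rate"] / 30
            - (cfg["qp_i"] + cfg["qp_p"] + cfg["qp_b"]) / 120)

def random_cfg():
    return {k: random.choice(v) for k, v in PARAM_SPACE.items()}

def crossover(a, b):
    return {k: random.choice((a[k], b[k])) for k in KEYS}

def mutate(cfg, rate=0.2):
    return {k: (random.choice(PARAM_SPACE[k]) if random.random() < rate else v)
            for k, v in cfg.items()}

def smc_search(generations=50, pop_size=20, seed=1):
    random.seed(seed)
    pop = [random_cfg() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective, reverse=True)
        elite = pop[: pop_size // 2]         # keep the best half
        pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=objective)
```

In the thesis's setup, `objective` would be replaced by an evaluation of the actual CODEC (encode a video, read off PSNR and compression), which is why a metaheuristic rather than exhaustive search is needed.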
|
123 |
Joint source video coding : joint rate control for H.264/AVC video coding / Teixeira, Luís Miguel Lopes January 2012 (has links)
Doctoral thesis (Tese de doutoramento). Electrical and Computer Engineering. Faculdade de Engenharia, Universidade do Porto. 2012
|
124 |
Joint source/channel optimization of an H.264/AVC video transmission over a wireless link (Optimisation conjointe source/canal d'une transmission vidéo H.264/AVC sur un lien sans fil) / Bergeron, Cyril 24 January 2007 (has links) (PDF)
In the field of multimedia data transmission, remarkable progress has been made over the last twenty years, allowing each module of a modern communication chain to be optimized. Despite these excellent results, a compartmentalized or "separate" approach has shown its limits in the case of wireless communications. Our approach, which follows that of joint source/channel coding, aims to develop strategies in which source coding and channel coding are determined jointly, while taking into account network parameters and possible user constraints. This approach lets the application world (source coding, encryption) and the transmission world (channel coding) communicate so that they jointly optimize the use of the wireless communication link end to end. Three research axes are treated in this thesis, which optimize the allocation of user and network resources while remaining compatible with the H.264 video coding standard. First, we propose to use the residual redundancy present in the bitstream at the output of the source coder to improve decoding performance. Next, we introduce a method providing temporal scalability properties compatible with the H.264 standard. Finally, we present a method for jointly optimizing the rate split between the source coder and the channel coder by means of an application-level controller that estimates the overall distortion introduced by these coders through computing the sensitivity of the bitstreams considered.
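The third contribution, joint rate allocation between source coder and channel coder, can be sketched as a one-dimensional search over the rate split that minimizes estimated end-to-end distortion. The distortion and loss models below are textbook-style stand-ins, not the sensitivity-based estimates of the thesis.

```python
# Minimal sketch of joint source/channel rate allocation: split a fixed rate
# budget between the source coder and the channel (FEC) coder so that
# estimated end-to-end distortion is minimized. All models are illustrative.

def source_distortion(source_rate):
    # Classical high-rate model: distortion falls exponentially with rate.
    return 2.0 ** (-2.0 * source_rate)

def loss_probability(fec_rate):
    # More redundancy -> fewer residual losses (illustrative model).
    return 0.5 ** (1.0 + 4.0 * fec_rate)

def end_to_end_distortion(source_rate, fec_rate, d_loss=1.0):
    p = loss_probability(fec_rate)
    return (1 - p) * source_distortion(source_rate) + p * d_loss

def best_split(total_rate, steps=100):
    # Exhaustive search over the split; a real controller would use measured
    # bitstream sensitivity instead of closed-form models.
    candidates = ((total_rate * i / steps, total_rate * (1 - i / steps))
                  for i in range(steps + 1))
    return min(candidates, key=lambda sf: end_to_end_distortion(*sf))
```

With these models, neither extreme (all rate to the source coder, or all to FEC) is optimal: the minimum lies at an interior split, which is the core argument for determining the two coders jointly.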
|
125 |
Image and video transmission over networks with packet losses: protection mechanisms and optimization of perceived quality (Transmission d'images et de vidéos sur réseaux à pertes de paquets : mécanismes de protection et optimisation de la qualité perçue) / Boulos, Fadi 12 February 2010 (has links) (PDF)
Multimedia traffic over IP has grown strongly in recent years thanks to the emergence of services such as IPTV and Video on Demand. However, the Quality of Use (QdU) associated with this type of traffic is not guaranteed, mainly because of fluctuations in the Quality of Service (QoS). To ensure a service of acceptable quality, it is possible to improve the QoS parameters or to improve the QdU directly. In this thesis, we study the perceptual impact of QdU variation and its improvement. We first propose to use the Mojette transform, an exact discrete Radon transform, as a network coding operator. This technique aims to improve QoS by optimizing the use of the available bandwidth. We also propose a method for perceptual unequal protection of hierarchical streams using the Mojette transform. We then study the perceptual effects of packet losses on H.264/AVC-coded videos through subjective quality assessment tests. These tests identify the importance of the spatial position of the loss within the picture. We then conduct eye-tracking experiments to identify the regions of interest of the video. Starting from a source hierarchy guided by these regions of interest, we propose unequal perceptual protection methods. These robust coding techniques, which employ the Flexible Macroblock Ordering (FMO) tool of H.264/AVC, are based on stopping the spatio-temporal propagation of degradations. Performance evaluation shows that the proposed methods are effective against packet losses occurring in the regions of interest of the video.
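The unequal-protection idea can be sketched very simply: packets carrying region-of-interest (ROI) data receive more redundancy than background packets under a fixed total budget. The priority values and proportional allocation below are assumptions for illustration, not the Mojette-based scheme of the thesis.

```python
# Illustrative sketch of perceptual unequal error protection (UEP): split a
# total redundancy budget across packets in proportion to their perceptual
# priority, handing any leftover units to the most important packets first.

def allocate_fec(packets, budget):
    """packets: list of (name, priority), higher priority = more important.
    budget: integer number of redundancy units to distribute."""
    total = sum(pr for _, pr in packets)
    levels = {name: budget * pr // total for name, pr in packets}
    leftover = budget - sum(levels.values())
    for name, _ in sorted(packets, key=lambda p: p[1], reverse=True):
        if leftover == 0:
            break
        levels[name] += 1
        leftover -= 1
    return levels
```

A packet loss hitting the ROI then has a higher chance of being recovered, which matches the subjective finding that losses in regions of interest hurt perceived quality the most.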
|
126 |
H.264 Baseline Real-time High Definition Encoder on CELL / Wei, Zhengzhe January 2010 (has links)
In this thesis, an H.264 baseline high-definition encoder is implemented on the CELL processor. The target video sequence of our encoder is YUV420 1080p at 30 frames per second. To meet real-time requirements, a system architecture that reduces DMA requests is designed for large memory accesses. Several key computing kernels (intra-frame encoding, motion estimation search, and entropy coding) are designed and ported to the CELL processor units. A main challenge is finding a good tradeoff between DMA latency and processing time. The limited 256 KB on-chip memory of each SPE has to be organized efficiently in a SIMD-friendly way. CAVLC is performed in non-real-time on the PPE.

The experimental results show that our encoder is able to encode I-frames in high quality and to encode common 1080p video sequences in real time. Using five SPEs and a 63 KB executable code size, 20.72M cycles are needed per SPE to encode one P-frame partition. The average PSNR of P-frames increases by at most 1.52%. For fast-motion video sequences, a 64x64 search range yields better frame quality than a 16x16 search range while requiring less than twice the computing cycles of 16x16. Our results also demonstrate that more of the CELL processor's potential can be utilized in multimedia computing.

The H.264 main profile will be implemented in future phases of this encoder project. Since the platform we use is the IBM Full-System Simulator, DMA performance on a real CELL processor remains an interesting issue. Real-time entropy coding is another challenge on CELL.
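The quality figures quoted above are PSNR values. For reference, a minimal PSNR computation between an original and a reconstructed frame of 8-bit samples can be sketched as follows (flattened pixel lists; the function name is ours):

```python
import math

def psnr(original, reconstructed, peak=255):
    """Peak Signal-to-Noise Ratio in dB between two equal-length sample lists."""
    mse = sum((a - b) ** 2
              for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")      # identical frames: infinite PSNR
    return 10 * math.log10(peak ** 2 / mse)
```

Typical broadcast-quality H.264 output lands in the 30-45 dB range, which is the scale on which the thesis's P-frame quality comparisons are made.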
|
127 |
Image coding with H.264 I-frames / Stillbildskodning med H.264 I-frames / Eklund, Anders January 2007 (has links)
In this thesis work, a part of the video coding standard H.264 has been implemented. The part of the video coder that is used to code the I-frames has been implemented to see how well suited it is for regular image coding. The big difference versus other image coding standards, such as JPEG and JPEG2000, is that this video coder uses both a predictor and a transform to compress the I-frames, while JPEG and JPEG2000 use only a transform. Since the prediction error is sent instead of the actual pixel values, many of the values are zero or close to zero before the transformation and quantization. The method thus closely resembles a video encoder, with the difference that blocks of an image are predicted instead of frames in a video sequence.
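The predictor-plus-transform idea described above can be sketched with the simplest of H.264's intra modes, DC prediction: each block is predicted from already-coded neighboring pixels and only the residual is coded, which is near zero for smooth images. H.264 defines several directional modes; only DC is shown, on flattened blocks, as an assumption-laden illustration rather than the thesis's implementation.

```python
# DC intra prediction sketch: predict every pixel of a block as the mean of
# the reconstructed neighbors above (top row) and to the left (left column),
# then code only the residual. Without quantization the round trip is exact.

def dc_predict(top, left):
    neighbors = list(top) + list(left)
    return round(sum(neighbors) / len(neighbors))

def encode_block(block, top, left):
    pred = dc_predict(top, left)
    return [p - pred for p in block]   # residual, sent to transform/quantizer

def decode_block(residual, top, left):
    pred = dc_predict(top, left)
    return [r + pred for r in residual]
```

For a smooth image region, the residuals are small integers around zero, which is exactly the property that lets the subsequent transform and entropy coder outperform transform-only schemes like JPEG.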
|
128 |
Parallel video decoding / Álvarez Mesa, Mauricio 08 September 2011 (has links)
Digital video is a popular technology used in many different applications. Video quality, expressed in spatial and temporal resolution, has been increasing continuously in recent years. In order to reduce the bitrate required for its storage and transmission, a new generation of video encoders and decoders (codecs) has been developed. The latest video codec standard, known as H.264/AVC, includes sophisticated compression tools that require more computing resources than any previous video codec. The combination of high-quality video and the advanced compression tools found in H.264/AVC has resulted in a significant increase in the computational requirements of video decoding applications.
The main objective of this thesis is to provide the performance required for real-time operation of high-quality video decoding using programmable architectures. Our solution has been the simultaneous exploitation of multiple levels of parallelism. On the one hand, video decoders have been modified in order to extract as much parallelism as possible. On the other hand, general-purpose architectures have been enhanced for exploiting the type of parallelism that is present in video codec applications.
We first analyzed the scalability of two Single Instruction Multiple Data (SIMD) extensions: a one-dimensional (1D) extension and a two-dimensional (2D) matrix extension. We showed that scaling the 2D extension yields higher performance at lower complexity than scaling the 1D extension.
We then characterized H.264/AVC decoding for high-definition (HD) applications and identified its main kernels. Due to the lack of an adequate benchmark for HD video decoding, we developed our own, called HD-VideoBench, which includes complete video encoding and decoding applications together with a set of HD video sequences.
Next, we optimized the most important kernels of the H.264/AVC decoder using SIMD instructions. However, the results did not reach the maximum attainable performance because of the negative effect of unaligned memory accesses. As a solution, we evaluated the hardware and software support required for unaligned accesses; this support produced significant performance improvements in the application.
Separately, we investigated how to extract task-level parallelism. We found that none of the existing mechanisms could scale to massively parallel systems. As an alternative, we developed a new algorithm that was able to find thousands of independent tasks by exploiting macroblock-level parallelism.
We then implemented a parallel version of the H.264 decoder on a distributed shared memory (DSM) machine. However, this implementation did not reach the maximum attainable performance due to the negative impact of synchronization operations and the effect of the entropy decoding kernel.
To eliminate these bottlenecks, we evaluated picture-level parallelization of the entropy decoding stage combined with macroblock-level parallelization of the remaining kernels. The overhead of synchronization operations was almost completely removed by using hardware-accelerated operations.
Together, the improvements presented enable real-time decoding of high-definition, high-frame-rate video. The overall result is a scalable solution capable of using the growing number of processors in multicore architectures.
|
129 |
Implementation of a Low Cost Reconfigurable Transform Architecture for Multiple Video Codecs / 2012 June 1900 (has links)
Currently, different transform techniques are used by different video codecs to achieve data compression during video frame transmission. Among them, the Discrete Cosine Transform (DCT) is supported by most modern video standards. The integer DCT (Int-DCT) is an integer approximation of the DCT that can be implemented exclusively with integer arithmetic, which makes it highly advantageous in cost and speed for hardware implementations. In particular, the 4x4 and 8x8 block-size Int-DCTs have increased applicability in the current multimedia industry because of their simpler implementation and better de-correlation performance for high-definition (HD) video signals.
In this thesis, we present a fast and cost-shared reconfigurable architecture to compute variable block size Int-DCT for four modern video codecs – AVS, H.264/AVC, VC-1 and HEVC (under development). Based on the symmetric structure of the transform matrices and the similarity in matrix operations, we have developed a generalized “decompose and share” algorithm to compute the 4x4 and 8x8 block size Int-DCT. The algorithm is later applied to those four video codecs. Our shared hardware approach ensures the maximum circuit reuse during the computation. The entire architecture is multiplier free and designed with only adders and shifters to minimize hardware cost and improve working frequency.
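The adder/shifter-only datapath is possible because the 4x4 integer transforms involved have butterfly factorizations free of general multiplications. As a reference point, the H.264/AVC 4x4 forward core transform is sketched below; the codec-specific matrices shared by the thesis's architecture differ, and the software sketch is ours, but the shift-and-add structure is the same idea.

```python
# Multiplier-free 4x4 H.264/AVC forward core transform: the butterfly uses
# only additions, subtractions and shifts, which is why an adder/shifter-only
# hardware datapath suffices. The transform matrix is
#   [[1, 1, 1, 1], [2, 1, -1, -2], [1, -1, -1, 1], [1, -2, 2, -1]].

def core_1d(a, b, c, d):
    p0, p1 = a + d, b + c          # butterfly stage: sums
    p2, p3 = b - c, a - d          # butterfly stage: differences
    return (p0 + p1,               # [ 1  1  1  1] row
            (p3 << 1) + p2,        # [ 2  1 -1 -2] row, shift instead of *2
            p0 - p1,               # [ 1 -1 -1  1] row
            p3 - (p2 << 1))        # [ 1 -2  2 -1] row

def forward_4x4(block):
    """block: 4x4 list of ints. Computes Y = C X C^T, rows then columns."""
    rows = [core_1d(*row) for row in block]
    cols = zip(*rows)                          # transpose
    out_cols = [core_1d(*col) for col in cols]
    return [list(r) for r in zip(*out_cols)]   # transpose back
```

Each 1D pass costs eight additions/subtractions and two shifts per four samples, which is the operation count a shared hardware implementation tries to reuse across codecs.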
Finally, the design is implemented on an FPGA and later synthesized in 0.18 um CMOS technology to compare cost and performance with existing designs. The results show a significant reduction in hardware cost while meeting the requirements of real-time video coding applications.
|
130 |
Fast Intra/Inter Mode Decision for a Real-time H.264 Streaming System / Alay, Ozgu 01 July 2006 (links) (PDF)
Video compression is a key technology used in several multimedia applications. Improvements in compression techniques, together with the increasing speed and optimized architectures of new processor families, enable us to use this technology more in real-time systems. H.264 (also known as MPEG-4 Part 10 or AVC, Advanced Video Coding) is the latest video coding standard and is noted for achieving very high data compression. While H.264 is superior to its predecessors, it has a very high computational complexity, which makes it costly for real-time applications. Thus, in order to perform video encoding at satisfactory speed, there is an obvious need to reduce the computational complexity, and new algorithms were developed for this purpose. The developed algorithms were implemented on the Texas Instruments TMS320C64x family to meet the need, arising from the computational complexity and the demand for portable devices in video processing, for optimized signal processing hardware with low power consumption. With the new algorithms, a computation reduction of 55% was achieved without losing perceptual image quality. Furthermore, the algorithms were implemented on a DSP along with networking functionality to obtain a video streaming system. The final system may be used in a wide range of fields, from surveillance systems to mobile systems.
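The abstract does not detail the fast mode-decision algorithm, but the general family it belongs to can be sketched: compute a cheap cost (here the sum of absolute differences, SAD) for the statistically most likely mode first, and skip the remaining candidate modes as soon as the cost is already good enough. The threshold, mode ordering, and names below are illustrative assumptions, not the thesis's algorithm.

```python
# Early-termination mode decision sketch: candidate modes are tried in order
# of prior likelihood and the search stops once a mode's SAD cost falls at or
# below a threshold, saving the cost of evaluating the remaining modes.

def sad(block, pred):
    """Sum of absolute differences between a block and its prediction."""
    return sum(abs(a - b) for a, b in zip(block, pred))

def decide_mode(block, candidates, early_exit_threshold):
    """candidates: list of (mode_name, predicted_block), ordered by prior
    likelihood. Returns (mode_name, cost, number_of_modes_evaluated)."""
    best = (None, float("inf"))
    for n, (name, pred) in enumerate(candidates, start=1):
        cost = sad(block, pred)
        if cost < best[1]:
            best = (name, cost)
        if best[1] <= early_exit_threshold:   # good enough: stop searching
            return best[0], best[1], n
    return best[0], best[1], len(candidates)
```

Skipping mode evaluations this way is where computation reductions of the order reported above typically come from, at the risk of occasionally picking a slightly suboptimal mode.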
|