  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Energy-efficient memory architecture design and management for parallel video coding / Projeto e gerenciamento de arquitetura de memória energeticamente eficiente para codificadores de vídeo HEVC

Sampaio, Felipe Martin. January 2018.
This thesis presents the design of an energy-efficient hybrid scratchpad video memory architecture (Hy-SVM) for parallel High Efficiency Video Coding (HEVC). Video coding stands out as a highly complex part of video processing applications, and the HEVC standard brought innovations that further increase the memory requirements, mainly due to: (a) the novel coding structures, which aggravate the computational complexity by providing a much wider range of coding possibilities to be analyzed; and (b) the high-level parallelism enabled by Tiles partitioning, which accelerates encoding performance but, at the same time, poses hard challenges to the memory infrastructure. The main bottleneck, in terms of both external memory traffic and on-chip storage, is the reference frames data: entire frames that have already been coded (and reconstructed) and must be kept in memory and intensively accessed while future frames are encoded. Due to the large volume of data required to represent the reference frames, they are typically stored in external memory, especially when high- and ultra-high-definition videos are targeted.

The proposed Hy-SVM architecture is inserted in a video coding system based on multiple-Tiles partitioning to enable parallel HEVC encoding: each Tile is assigned to a specific processing unit, and the different Tiles are processed in parallel. The key ideas of Hy-SVM are: application-specific memory design and management; multiple combined levels of private and shared memories that jointly exploit intra-Tile and inter-Tiles data reuse; scratchpad memories (SPMs) as energy-efficient on-chip data storage; and hybrid memories (HyMs) combining SRAM and STT-RAM cells. A design methodology is proposed for Hy-SVM that leverages application-specific properties to properly define the HyM design parameters.

To provide run-time adaptation (for both the off-chip and on-chip parts), Hy-SVM integrates a memory management layer composed of: (1) overlap prediction, which identifies redundant memory access behavior across the processing units by analyzing the monitored memory accesses of past frame encodings, in order to increase the exploitation of inter-Tiles data reuse; (2) memory pressure management, which balances the memory traffic accumulated across the Tiles in order to improve the usage of the external memory communication channel; and (3) a lifetime-aware data management scheme that relieves the STT-RAM SPMs of write accesses with high bit-toggling activity, in order to extend the cells' lifetime and to reduce the penalties related to the poor write characteristics of STT-RAM. Application-specific knowledge is exploited by inheriting HEVC coding parameters and by monitoring memory accesses at run time; this information feeds both the design methodology of the on-chip video memories and the run-time management strategies. Based on the decisions taken by the management layer, Hy-SVM integrates distributed memory access management units (MAMUs) that control the access dynamics of the private and shared SPMs.
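The abstract describes overlap prediction only at a high level. As an illustration of the general idea (a hypothetical sketch, not the thesis's actual algorithm), the following Python snippet logs which reference-frame regions each Tile's processing unit touched while encoding the previous frame and flags the regions touched by more than one Tile as candidates for the shared SPM level; the region granularity and all names are assumptions.

```python
from collections import defaultdict

def predict_shared_regions(prev_frame_accesses):
    """Flag reference-frame regions touched by more than one Tile in the
    previous frame: likely inter-Tiles overlaps, hence candidates for
    allocation in the shared SPM level while encoding the next frame."""
    touch_count = defaultdict(int)
    for regions in prev_frame_accesses.values():
        for region in regions:
            touch_count[region] += 1
    return {region for region, count in touch_count.items() if count > 1}

# Hypothetical access log: (x, y) origins of 64x64 reference regions per Tile.
prev_frame_accesses = {
    0: {(0, 0), (64, 0), (64, 64)},     # regions read while encoding Tile 0
    1: {(64, 0), (128, 0), (128, 64)},  # Tile 1 overlaps Tile 0 at (64, 0)
}
print(predict_shared_regions(prev_frame_accesses))  # {(64, 0)} -> shared SPM
```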
Additionally, adaptive power management units (APMUs) are able to strongly reduce on-chip energy consumption thanks to the accurate overlap prediction. The experimental results demonstrate the overall energy savings of Hy-SVM over related works under various HEVC encoding scenarios. Compared to traditional data reuse schemes, such as Level-C, the combined intra-Tile and inter-Tiles data reuse provides 69%-79% energy reduction. Compared to related HEVC video memory architectures, the savings range from 2.8% (worst case) to 67% (best case). From the external memory perspective, Hy-SVM improves data reuse (by also exploiting inter-Tiles data redundancy), resulting in 11%-71% lower off-chip energy consumption. Additionally, the APMUs reduce the on-chip energy consumption of Hy-SVM by 56%-95% for the evaluated HEVC scenarios; thus, compared to related works, Hy-SVM presents the lowest on-chip energy consumption. The memory pressure management scheme reduces the variations in memory bandwidth by 37%-83% compared to the traditional raster-scan processing order for 4- and 16-core parallel HEVC encoders. The lifetime-aware data management significantly extends the STT-RAM lifetime, achieving a normalized lifetime of 0.83 (close to the optimal case). Moreover, the overhead of implementing the management units does not significantly affect the performance and energy efficiency of Hy-SVM.
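The lifetime-aware data management is likewise only outlined here. A common way to cut bit-toggling on STT-RAM writes, shown below purely as a sketch of that generic technique (not necessarily the scheme used in the thesis), is a differential write: compare the incoming word with the stored word and update only the cells whose value actually changes, since bit switching dominates STT-RAM write energy and wear.

```python
def differential_write(stored_word: int, new_word: int, width: int = 32) -> list:
    """Return the bit positions that must be toggled to turn stored_word into
    new_word; writing only those cells reduces the switching activity that
    dominates STT-RAM write energy and wear-out."""
    diff = (stored_word ^ new_word) & ((1 << width) - 1)
    return [bit for bit in range(width) if (diff >> bit) & 1]

# Only 2 of the 32 cells are actually written in this made-up example.
print(differential_write(0b1010_0000, 0b1010_0011))  # [0, 1]
```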
12

Optimisation du codage HEVC par des moyens de pré-analyse et/ou pré-codage du contenu / HEVC encoder optimization with pre-analysis and/or pre-encoding of the video content

Dhollande, Nicolas. 21 April 2016.
The High Efficiency Video Coding (HEVC) standard, released in 2013, offers compression gains exceeding 50% compared to the previous standard MPEG-4 AVC/H.264. These gains come at the cost of a very significant increase in encoding complexity. Adding the complexity increase induced by the move from High Definition (HD) to Ultra High Definition (UHD) resolutions and frame rates, one quickly understands the relevance of complexity reduction techniques for developing cost-effective encoders.

As a first contribution, we address the encoding complexity of Intra pictures. We propose a method that infers the UHD coding modes from a pre-encoding of the video down-sampled to HD, and then a fast partitioning method based on a pre-analysis of the content. The first method reduces the complexity by a factor of 3 and the second one by a factor of 6, for a compression efficiency loss close to 5%. As a second contribution, we address Inter pictures. By applying inference rules in the UHD encoder from an HD pre-encoding pass, the encoding complexity is reduced by a factor of 3 when both the HD and UHD streams are considered, and by a factor of 9.2 on the UHD stream alone, for a compression efficiency loss close to 3%.
When applied to an encoding configuration close to an actually deployed system, our approach still reduces the encoding complexity of the UHD stream by a factor close to 2, for a compression loss limited to 4%. The complexity reduction strategies developed in this thesis for Intra and Inter coding offer encouraging prospects for the implementation of HEVC UHD encoders that are more economical in computing resources. They are particularly well suited to the WebTV/OTT segment, which plays a growing part in video delivery and in which the video signal is encoded at multiple resolutions to address networks and devices of varied capacities.
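The inference rules themselves are not detailed in the abstract. A hedged sketch of the general principle, with hypothetical names and granularity, is to co-locate each UHD coding block with the HD block covering the same picture area (coordinates halved for a 2x spatial down-sample) and to restrict the UHD rate-distortion search to a neighbourhood of the coding-unit (CU) depth chosen by the HD pass:

```python
def constrain_uhd_cu_depths(hd_depths, uhd_x, uhd_y, margin=1):
    """Look up the CU depth chosen by the HD pre-encoding for the co-located
    block (UHD coordinates halved for a 2x down-sample) and return the depth
    range the UHD encoder should still evaluate instead of the full search."""
    hd_depth = hd_depths[(uhd_x // 2, uhd_y // 2)]
    low = max(0, hd_depth - margin)
    high = min(3, hd_depth + margin)  # HEVC CU depths 0..3 (64x64 down to 8x8)
    return range(low, high + 1)

# Hypothetical HD pre-encoding result: chosen CU depth per 32x32 HD block origin.
hd_depths = {(0, 0): 1, (32, 0): 2}
print(list(constrain_uhd_cu_depths(hd_depths, 64, 0)))  # UHD block tests depths 1..3 only
```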
13

Optimisation des techniques de compression d'images fixes et de vidéo en vue de la caractérisation des matériaux : applications à la mécanique / Optimization of compression techniques for still images and video for characterization of materials : mechanical applications

Eseholi, Tarek Saad Omar. 17 December 2018.
This PhD thesis focuses on the optimization of still image and video compression techniques for the characterization of materials in mechanical engineering applications. It is part of the MEgABIt (MEchAnic Big Images Technology) research project supported by the Université Polytechnique Hauts-de-France (UPHF). The scientific objective of the MEgABIt project is to investigate the ability to compress the large data flows produced by the mechanical instrumentation of deformations, with large volumes in both the spatial and the frequency domain.
We propose to design original algorithms that operate in the compressed domain, so as to make the evaluation of the mechanical parameters computationally feasible while preserving as much as possible of the information provided by the acquisition systems (high-speed imaging, 3D tomography). To be relevant, the compression of high-definition, high-dynamic-range deformation measurements must allow the optimal computation of morpho-mechanical parameters without losing the essential characteristics of the mechanical surface images, which could lead to wrong analysis or classification. In this thesis, we use the state-of-the-art HEVC (High Efficiency Video Coding) standard prior to the analysis, classification or processing that evaluates the mechanical parameters.

We first quantify the impact of compression on video sequences from a high-speed camera. The experimental results show that compression ratios of up to 100:1 can be applied without significant degradation of the mechanical surface response of the material as measured by the VIC-2D analysis tool. We then develop an original classification method that works in the compressed domain of a surface topography image database. The topographic image descriptor is built from the prediction modes computed by the intra-picture prediction applied during the lossless HEVC compression of the images, and a support vector machine (SVM) is introduced to strengthen the performance of the proposed system. Experimental results show that the compressed-domain classifier is robust for classifying our six categories of mechanical topographies, based on either single-scale or multi-scale analysis methodologies, for lossless compression ratios of up to 6:1 depending on image complexity. We also evaluate the effect of the surface filtering type (high-pass, low-pass and band-pass filters) and of the analysis scale on the efficiency of the proposed classifier: the large analysis scale of the high-frequency components of the surface profile is the best suited for classifying our topography database, with an accuracy reaching 96%.
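As a rough illustration of such a compressed-domain descriptor (the exact feature construction used in the thesis may differ), one can build a normalized histogram of the 35 HEVC intra prediction modes (planar, DC and 33 angular modes) selected over an image and feed it to an SVM. The sketch below assumes NumPy and scikit-learn, uses synthetic stand-in data, and supposes that the per-block modes have already been extracted from the encoder.

```python
import numpy as np
from sklearn.svm import SVC

N_INTRA_MODES = 35  # HEVC intra prediction: planar (0), DC (1), 33 angular modes

def mode_histogram(block_modes):
    """Normalized histogram of the intra modes chosen over one image:
    a fixed-length compressed-domain descriptor of the surface topography."""
    hist = np.bincount(block_modes, minlength=N_INTRA_MODES).astype(float)
    return hist / max(hist.sum(), 1.0)

# Synthetic stand-in data: per-image mode lists and six topography class labels.
rng = np.random.default_rng(0)
train_modes = [rng.integers(0, N_INTRA_MODES, size=500) for _ in range(60)]
train_labels = rng.integers(0, 6, size=60)

X = np.stack([mode_histogram(m) for m in train_modes])
clf = SVC(kernel="rbf").fit(X, train_labels)

test_image_modes = rng.integers(0, N_INTRA_MODES, size=500)
print(clf.predict(mode_histogram(test_image_modes).reshape(1, -1)))
```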
