21. Cross Layer Design for Video Streaming over 4G Networks Using SVC. Radhakrishna, Rakesh, January 2012.
Fourth Generation (4G) cellular technology, Third Generation Partnership Project (3GPP) Long Term Evolution (LTE), offers high data rates to mobile users, and operators are trying to deliver a true mobile broadband experience over LTE networks. Mobile TV and Video on Demand (VoD) are expected to be the main revenue generators in the near future [36], and efficient video streaming over wireless is the key to enabling this. 3GPP recommends the H.264 baseline profile for all video-based services in Third Generation (3G) Universal Mobile Telecommunication System (UMTS) networks. However, LTE networks need to support mobile devices with different display resolution requirements, from small-resolution mobile phones to high-resolution laptops. Scalable Video Coding (SVC) is required to achieve this goal; a feasibility study of SVC for LTE is one of the main agenda items of 3GPP Release 10. SVC enhances H.264 with a set of new profiles and encoding tools that can be used to produce scalable bit streams. This thesis proposes efficient adaptation methods for SVC video transmission over LTE networks. Advantages of SVC over H.264 are analyzed using real-time use cases of mobile video streaming. Further, we study cross-layer adaptation and scheduling schemes for delivering SVC video streams efficiently to users in LTE networks in unicast and multicast transmissions. We propose an SVC-based video streaming scheme for unicast and multicast downlink transmissions, with dynamic adaptation and a scheduling scheme driven by channel quality information from users. Simulation results indicate improved video quality for a larger number of users in the coverage area and more efficient spectrum usage with the proposed methods.
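As a rough illustration of the kind of channel-quality-driven adaptation this abstract describes (not code from the thesis), the sketch below decides how many SVC layers to send to each user from its reported CQI; the CQI-to-rate table and the per-layer bitrates are hypothetical placeholders.

    # Hedged sketch: choose how many SVC layers each user receives based on
    # reported channel quality (CQI). All numbers below are illustrative only.

    # Hypothetical mapping from CQI index to achievable downlink rate (kbps).
    CQI_TO_RATE_KBPS = {1: 200, 3: 600, 5: 1200, 7: 2400, 9: 4800, 11: 8000, 15: 15000}

    # Cumulative bitrate (kbps) needed to decode up to each SVC layer
    # (base layer first, then enhancement layers) -- assumed values.
    SVC_LAYER_CUM_RATES = [400, 900, 1800, 3500]

    def layers_for_user(cqi: int) -> int:
        """Return the highest number of SVC layers the user's channel can sustain."""
        # Fall back to the closest lower CQI entry in the table.
        usable = [c for c in CQI_TO_RATE_KBPS if c <= cqi]
        rate = CQI_TO_RATE_KBPS[max(usable)] if usable else 0
        n = 0
        for cum_rate in SVC_LAYER_CUM_RATES:
            if cum_rate <= rate:
                n += 1
        return max(n, 1)  # policy choice for this sketch: always send the base layer

    if __name__ == "__main__":
        for cqi in (1, 5, 9, 15):
            print(f"CQI {cqi:2d} -> send {layers_for_user(cqi)} SVC layer(s)")

A multicast scheduler could apply the same mapping per multicast group, taking the minimum (or a percentile) of the reported CQIs so that the chosen layer count is decodable by the targeted share of receivers.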
22. Multimedia data dissemination in opportunistic systems / Diffusion multimédia de données dans des systèmes opportunistes. Klaghstan, Merza, 01 December 2016.
Opportunistic networks (OppNets) are human-centric mobile ad hoc networks in which neither the topology nor the participating nodes are known in advance. Routing is planned dynamically following the store-carry-and-forward paradigm, which takes advantage of human mobility; this widens the range of communication and supports indirect end-to-end data delivery. But because of individuals' mobility, OppNets are characterized by frequent communication disruptions and uncertain data delivery. Hence, these networks are mostly used for exchanging small messages such as disaster alarms or traffic notifications, and scenarios that require exchanging larger data remain challenging given the characteristics of this kind of network. There are nevertheless multimedia-sharing scenarios in which a user might need to switch to an ad hoc alternative, for example 1) the absence of infrastructure networks in remote rural areas, 2) high costs due to limited data volumes, or 3) undesirable censorship by third parties when exchanging sensitive content. Consequently, this thesis targets a video dissemination scheme in OppNets. For the video delivery problem in sparse opportunistic networks, we propose a solution comprising three contributions. The first granulates videos at the source node into smaller parts and associates them with unequal redundancy degrees. Technically, this is based on Scalable Video Coding (SVC), which encodes a video into several layers of unequal importance for viewing the content at different quality levels. Layers are routed using the Spray-and-Wait routing protocol, with different redundancy factors for the different layers depending on their importance. In this context, a video-viewing QoE metric is also proposed, which takes the perceived video quality, delivery delay and network overhead into consideration on a scalable basis. Second, we take advantage of the small Network Abstraction Layer (NAL) units that compose SVC layers. NAL units are packetized together under specific size constraints to optimize granularity, and packet sizes are tuned adaptively with regard to the dynamic network conditions: each node records a history of environmental information about contacts and forwarding opportunities, and uses this history to predict future opportunities and optimize packet sizes accordingly. Lastly, the receiver node is pushed into action by reacting to missing data parts through a composite backward loss-concealment mechanism: the receiver first asks other nodes in the network for the missing data in a request-response fashion; then, since the transmission concerns video content, video-frame loss-concealment techniques are also exploited at the receiver side. The proposed loss-concealment mechanism combines these two techniques to react to missing data parts.
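To make the unequal-redundancy idea concrete, here is a small illustrative sketch (not the thesis implementation): each SVC layer gets its own binary Spray-and-Wait copy budget, with the base layer sprayed most aggressively. The layer count and copy budgets are assumptions made for the example.

    # Hedged sketch of layer-dependent Spray-and-Wait redundancy:
    # more important SVC layers get a larger copy budget ("spray" count).
    # Copy counts are illustrative assumptions, not values from the thesis.

    from dataclasses import dataclass

    @dataclass
    class LayerBundle:
        layer_id: int        # 0 = base layer, higher = enhancement layers
        copies_left: int     # remaining copies this node may hand out

    def initial_copies(layer_id: int, base_copies: int = 16) -> int:
        """Halve the copy budget for each additional enhancement layer."""
        return max(base_copies >> layer_id, 1)

    def on_contact(bundle: LayerBundle, peer_has_bundle: bool) -> int:
        """Binary Spray-and-Wait step: hand half of the remaining copies to the peer."""
        if peer_has_bundle or bundle.copies_left <= 1:
            return 0                      # wait phase: only direct delivery
        handed_over = bundle.copies_left // 2
        bundle.copies_left -= handed_over
        return handed_over

    if __name__ == "__main__":
        bundles = [LayerBundle(l, initial_copies(l)) for l in range(4)]
        for b in bundles:
            given = on_contact(b, peer_has_bundle=False)
            print(f"layer {b.layer_id}: budget {initial_copies(b.layer_id)}, "
                  f"gave {given}, keeps {b.copies_left}")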
23. Logical Superposition Coded Modulation for Wireless Video Multicasting. Ho, James Ching-Chih, January 2009.
This thesis documents the design of logical superposition coded (SPC) modulation for wireless video multicast systems, tackling the issues caused by multi-user channel diversity, one of the legacy problems inherent to wireless video multicasting. The framework generates a logical SPC-modulated signal by mapping successively refinable information bits into a single signal constellation through modifications in the MAC-layer software. The transmitted logical SPC signals not only mimic the SPC signals generated by superposing multiple modulated signals in conventional hardware-based SPC modulation, but also yield comparable performance gains when provided with knowledge of the information-bit dependencies and the receivers' channel distributions. At the receiving end, the proposed approach requires only simple modifications to the MAC-layer software and retains full decoding compatibility with the conventional multi-stage signal-interference cancellation (SIC) approach, which involves additional hardware devices. Generalized formulations for the symbol error rate (SER) are derived for performance evaluation and comparison with the conventional hardware-based approach.
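For intuition, the sketch below maps one base-layer bit and one enhancement-layer bit into a hierarchical 4-PAM point, which is the classic way superposition/hierarchical modulation embeds refinable bit streams in a single constellation; the power-split factor alpha is an assumed parameter, and this is not the thesis's actual mapping.

    # Hedged sketch of hierarchical (superposition-style) mapping:
    # a base-layer bit picks the coarse half of the constellation,
    # an enhancement bit refines the point within that half.
    # The power-split factor alpha is an illustrative assumption.

    def hierarchical_4pam(base_bit: int, enh_bit: int, alpha: float = 0.8) -> float:
        """Map (base, enhancement) bits to one 4-PAM amplitude.

        alpha is the fraction of power given to the base layer;
        the rest goes to the enhancement layer.
        """
        base_amp = (2 * base_bit - 1) * (alpha ** 0.5)        # +/- sqrt(alpha)
        enh_amp = (2 * enh_bit - 1) * ((1 - alpha) ** 0.5)    # +/- sqrt(1 - alpha)
        return base_amp + enh_amp

    if __name__ == "__main__":
        for b in (0, 1):
            for e in (0, 1):
                print(f"base={b} enh={e} -> amplitude {hierarchical_4pam(b, e):+.3f}")

With alpha close to 1, the base-layer decision regions stay far apart, so receivers with poor channels can still recover the base layer while better receivers additionally decode the enhancement bit; that is the trade-off a logical SPC mapper has to tune.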
24. Compressive Sensing for 3D Data Processing Tasks: Applications, Models and Algorithms. January 2012.
Compressive sensing (CS) is a novel sampling methodology representing a paradigm shift from conventional data acquisition schemes. The theory of compressive sensing ensures that, under suitable conditions, compressible signals or images can be reconstructed from far fewer samples or measurements than required by the Nyquist rate. So far, most works on CS in the literature concentrate on one-dimensional or two-dimensional data. However, besides involving far more data, three-dimensional (3D) data processing has particularities that require the development of new techniques in order to make successful transitions from theoretical feasibility to practical capacity. This thesis studies several issues arising from the application of the CS methodology to 3D image processing tasks. Two specific applications are hyperspectral imaging and video compression, where 3D images are either directly unmixed or recovered as a whole from CS samples. The main issues include CS decoding models, preprocessing techniques and reconstruction algorithms, as well as CS encoding matrices in the case of video compression. Our investigation involves three major parts. (1) Total variation (TV) regularization plays a central role in the decoding models studied in this thesis. To solve such models, we propose an efficient scheme to implement the classic augmented Lagrangian multiplier method and study its convergence properties. The resulting Matlab package TVAL3 is used to solve several models. Computational results show that, thanks to its low per-iteration complexity, the proposed algorithm is capable of handling realistic 3D image processing tasks. (2) Hyperspectral image processing typically demands heavy computational resources due to the enormous amount of data involved. We investigate low-complexity procedures to unmix, sometimes blindly, CS-compressed hyperspectral data and directly obtain material signatures and their abundance fractions, bypassing the high-complexity task of reconstructing the image cube itself. (3) To overcome the "cliff effect" suffered by current video coding schemes, we explore a compressive video sampling framework to improve scalability with respect to channel capacities. We propose and study a novel multi-resolution CS encoding matrix and a decoding model with a TV-DCT regularization function. Extensive numerical results are presented, obtained from experiments that use not only synthetic data but also real data measured by hardware. The results establish the feasibility and robustness, to various extents, of the proposed 3D data processing schemes, models and algorithms. Many challenges remain to be resolved in each area, but the progress made in this thesis will hopefully represent a useful first step towards meeting them in the future.
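As a reference point for the decoding models mentioned above, a generic TV-regularized CS reconstruction problem (a standard textbook formulation, not necessarily the exact model solved by TVAL3 in the thesis) can be written as

\[
\min_{u}\; \mathrm{TV}(u) = \sum_{i} \|D_i u\|_2
\quad \text{subject to} \quad A u = b,
\]

where \(u\) is the vectorized image, \(D_i u\) is the discrete gradient at pixel \(i\), \(A\) is the CS measurement matrix and \(b\) the vector of measurements. An augmented Lagrangian treatment of the measurement constraint alternates minimization of

\[
\mathcal{L}_{A}(u,\lambda) = \mathrm{TV}(u) - \lambda^{\top}(Au - b) + \tfrac{\beta}{2}\,\|Au - b\|_2^2
\]

over \(u\) with multiplier updates \(\lambda \leftarrow \lambda - \beta (Au - b)\); the per-iteration cost is dominated by applications of \(A\), \(A^{\top}\) and the finite-difference operators, which is what keeps such schemes practical for 3D data.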
26. Arquitetura de co-projeto hardware/software para implementação de um codificador de vídeo escalável padrão H.264/SVC [Hardware/software co-design architecture for implementing an H.264/SVC standard scalable video encoder]. Husemann, Ronaldo, January 2011.
In order to support heterogeneous networks and distinct devices simultaneously, modern multimedia systems can adopt the scalability concept, in which the video stream is composed of multiple layers, each one gradually enhancing the exhibited video quality according to the capabilities of each receiver. Currently, the H.264/SVC specification can be considered the state of the art in this area for its improved coding efficiency, but it imposes extremely high computational demands. Based on that, this work presents a hardware/software co-design architecture that explores the characteristics of the H.264/SVC internal algorithms, aiming at the right balance between the two technologies (hardware and software) in order to produce a practical scalable encoder implementation able to process up to 16 layers in 1920x1080-pixel format. Starting from an H.264/SVC reference code model, refined to reduce the overall encoding time, approaches for module partitioning and data integration between hardware and software were defined. The proposed methodology took into account characteristics such as data dependency and the inherent potential for parallelism, as well as practical restrictions such as the influence of communication interfaces and memory accesses. In particular, the transform, quantization, deblocking and inter-layer prediction modules were implemented in hardware, while the functions of system management, entropy coding, rate control and user interface were kept in software. The complete solution, obtained by integrating the hardware modules, synthesized on a development board, with the refined reference software, validates the proposal through the significant performance gains registered, indicating it as an adequate solution for applications that require real-time scalable video coding.
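Purely as an illustration of the kind of partitioning this abstract describes (the stage names follow the abstract, but the orchestration API and data structures are invented for the example), a software orchestrator might dispatch the hardware-mapped stages per macroblock like this:

    # Hedged sketch of a HW/SW partition for a scalable encoder: software
    # orchestrates the pipeline and offloads the stages the abstract says were
    # moved to hardware (inter-layer prediction, transform, quantization,
    # deblocking). The HardwareAccelerator API below is invented for
    # illustration; it is not the thesis's actual interface.

    HW_STAGES = ["inter_layer_prediction", "transform", "quantization", "deblocking"]
    SW_STAGES = ["entropy_coding", "rate_control"]

    class HardwareAccelerator:
        """Stand-in for FPGA modules reached over a memory-mapped interface."""
        def run(self, stage: str, macroblock: dict) -> dict:
            macroblock.setdefault("trace", []).append(f"HW:{stage}")
            return macroblock

    def encode_macroblock(mb: dict, hw: HardwareAccelerator) -> dict:
        # Hardware path: data-parallel, compute-heavy stages.
        for stage in HW_STAGES:
            mb = hw.run(stage, mb)
        # Software path: control-heavy, sequential stages.
        for stage in SW_STAGES:
            mb.setdefault("trace", []).append(f"SW:{stage}")
        return mb

    if __name__ == "__main__":
        mb = encode_macroblock({"layer": 0, "index": 0}, HardwareAccelerator())
        print(" -> ".join(mb["trace"]))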
27. Avaliação subjetiva de qualidade aplicada à codificação de vídeo escalável / Subjective video quality assessment applied to scalable video coding. Daronco, Leonardo Crauss, January 2009.
The constant advances in multimedia processing and transmission over the past years have enabled the creation of several applications and services based on multimedia data, such as video streaming, teleconferencing, remote classes and IPTV. Furthermore, a large variety of devices, from personal computers to mobile phones, are now capable of receiving these transmissions and displaying the multimedia data. Most of these applications are widely adopted nowadays and, as technology advances, users are becoming more demanding about the quality of the services they use. Given the diversity of devices and networks available today, one of the big challenges for these multimedia systems is to adapt the transmission to the receivers' characteristics and conditions. A suitable solution to provide this adaptation is the integration of scalable video coding with layered transmission. Since the final product of these multimedia systems is the multimedia data presented to the user, the quality of these data defines the performance of the system and the users' satisfaction. This work presents a study of the subjective quality of scalable video sequences coded using the scalable extension of the H.264 standard (SVC). A group of experiments was performed to measure, primarily, the effects that transmission instability (variations in the number of video layers received) has on video quality, and how the three scalability methods (spatial, temporal and quality) compare in terms of subjective quality. The decisions taken to model the tests were based on layered transmission systems that use protocols for adaptability and congestion control. To run the subjective assessments we used the ACR-HRR methodology and the recommendations given by ITU-R Rec. BT.500 and ITU-T Rec. P.910. The results show that the modelled instability does not cause significant alterations in the overall subjective video quality compared to a stable video, and that temporal scalability usually produces videos of worse quality than the spatial and quality methods, the latter yielding the best quality. The main contributions of this work are the results obtained in the subjective assessments. Also considered as contributions are the methodology used throughout the work (including the test plan definition, the use of tools such as JSVM, the test material selection and the steps taken during the assessment), the applications that were developed, the definition of future work, and the specification of some problems that can also be addressed with subjective quality evaluations.
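For context on how ACR-style ratings are typically summarized (a generic illustration, not the thesis's analysis scripts; the ratings below are arbitrary made-up inputs), a mean opinion score per test condition can be computed as:

    # Hedged sketch: summarizing ACR (Absolute Category Rating) scores, where
    # viewers rate each processed sequence on a 1-5 scale, into a mean opinion
    # score (MOS) with an approximate 95% confidence interval.
    # The rating lists are fabricated example inputs, not experimental data.

    from statistics import mean, stdev

    def mos_with_ci(ratings: list[int]) -> tuple[float, float]:
        """Return (MOS, 95% confidence half-width) for one test condition."""
        m = mean(ratings)
        # 1.96 ~ normal-approximation z value for a 95% interval.
        half_width = 1.96 * stdev(ratings) / len(ratings) ** 0.5
        return m, half_width

    if __name__ == "__main__":
        conditions = {
            "spatial scalability, 2 layers":  [4, 4, 5, 3, 4, 4, 5, 4],
            "temporal scalability, 2 layers": [3, 2, 3, 3, 2, 3, 4, 3],
            "quality scalability, 2 layers":  [4, 5, 4, 4, 5, 4, 4, 5],
        }
        for name, ratings in conditions.items():
            m, ci = mos_with_ci(ratings)
            print(f"{name}: MOS = {m:.2f} +/- {ci:.2f}")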