11 |
3D multiple description coding for error resilience over wireless networks / Umar, Abubakar Sadiq / January 2011 (has links)
Mobile communications have attracted growing interest from customers and service providers alike over the last two decades. Visual information is used in many application domains such as remote health care, video-on-demand, broadcasting, and video surveillance. In order to enhance the visual effect of digital video content, depth perception needs to be provided along with the actual visual content. 3D video has earned significant interest from the research community in recent years, due to the tremendous impact it has on viewers and its enhancement of the user's quality of experience (QoE). In the near future, 3D video is likely to be used in most video applications, as it offers a greater sense of immersion and perceptual experience. When 3D video is compressed and transmitted over error-prone channels, the associated packet loss leads to visual quality degradation. When a picture is lost or corrupted so severely that the concealment result is not acceptable, the receiver typically pauses video playback and waits for the next INTRA picture to resume decoding. Error propagation caused by predictive coding may degrade the video quality severely. There are several ways to mitigate the effects of such transmission errors; one widely used technique in international video coding standards is error resilience. The motivation behind this research is that existing schemes for 2D colour video compression, such as MPEG, JPEG and H.263, cannot be applied directly to 3D video content. 3D video signals contain depth as well as colour information and are bandwidth-demanding, as they require the transmission of multiple high-bandwidth 3D video streams. On the other hand, the capacity of wireless channels is limited, and wireless links are prone to various types of errors caused by noise, interference, fading, handoff, error bursts and network congestion.
Given a maximum bit-rate budget for representing the 3D scene, bit-rate allocation between texture and depth information should be optimised so that rendering distortion and losses are minimised. To mitigate the effect of these errors on perceptual 3D video quality, error-resilient video coding needs to be investigated further to offer a better quality of experience (QoE) to end users. This research work aims at enhancing the error resilience capability of compressed 3D video transmitted over mobile channels, using Multiple Description Coding (MDC), in order to improve the user's quality of experience (QoE). Furthermore, this thesis examines the sensitivity of the human visual system (HVS) when viewing 3D video scenes. The approach used in this study is subjective testing: people's perception of 3D video under error-free and error-prone conditions is rated through a carefully designed bespoke questionnaire.
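The multiple description idea referred to above, coding the video into independently decodable descriptions so that losing one path degrades rather than destroys the picture, can be sketched with a temporal (odd/even frame) split. This is a minimal illustration, not the coder developed in the thesis; the frame-repetition concealment is one simple choice among many.

```python
def split_descriptions(frames):
    """Split a frame sequence into two descriptions (odd/even temporal subsampling)."""
    return frames[0::2], frames[1::2]

def merge_descriptions(d0, d1):
    """Reconstruct the full sequence when both descriptions arrive."""
    merged = []
    for a, b in zip(d0, d1):
        merged.extend([a, b])
    merged.extend(d0[len(d1):])  # handle odd-length sequences
    return merged

def conceal_single_description(d, total_len):
    """If only one description arrives, conceal missing frames by frame repetition."""
    out = []
    for f in d:
        out.extend([f, f])
    return out[:total_len]

frames = list(range(10))
d0, d1 = split_descriptions(frames)
assert merge_descriptions(d0, d1) == frames
```

Each description is independently watchable at half the frame rate, which is the property that makes MDC attractive over lossy wireless paths.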
|
12 |
Performance Modeling of QoE-Aware Multipath Video Transmission in the Future Internet / Leistungsmodellierung einer Mehrpfad Video Übertragung im zukünftigen Internet unter Berücksichtigung der QoE / Zinner, Thomas / January 2012 (has links) (PDF)
Internet applications are becoming more and more flexible in order to support diverse user demands and network conditions. This is reflected by technical concepts which provide new adaptation mechanisms allowing fine-grained adjustment of the application quality and the corresponding bandwidth requirements. In the case of video streaming, the scalable video codec H.264/SVC allows flexible adaptation of frame rate, video resolution and image quality with respect to the available network resources. In order to guarantee a good user-perceived quality (Quality of Experience, QoE), it is necessary to adjust and optimize the video quality accurately. But it is not only the applications of the current Internet that have changed. Within the network and transport layers, new technologies have evolved in recent years, providing more flexible and efficient usage of data transport and network resources. One of the most promising technologies is Network Virtualization (NV), which is seen as an enabler to overcome the ossification of the Internet stack. It provides means to simultaneously operate multiple logical networks, allowing for example application-specific addressing, naming and routing, or individual resource management. New transport mechanisms like multipath transmission on the network and transport layers aim at an efficient usage of available transport resources. However, the simultaneous transmission of data via heterogeneous transport paths and communication technologies inevitably introduces packet reordering. Additional mechanisms and buffers are required to restore the correct packet order and thus to prevent a disturbance of the data transport. Proper buffer dimensioning, as well as classification of the impact of varying path characteristics like bandwidth and delay, requires appropriate evaluation methods. Additionally, path selection mechanisms need real-time evaluation methods.
A better application-network interaction and the corresponding exchange of information enable an efficient adaptation of the application to the network conditions, and vice versa. This PhD thesis analyzes a video streaming architecture utilizing multipath transmission and scalable video coding, and develops the following optimization possibilities and results: analysis and dimensioning methods for multipath transmission; quantification of the possibilities for adapting H.264/SVC to the current network conditions with respect to the QoE; and evaluation and optimization of a future video streaming architecture which allows a better interaction of application and network.
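The reordering problem described above can be illustrated with a minimal receive-side buffer: packets are held until the next expected sequence number arrives, and the buffer capacity is exactly the dimensioning parameter the evaluation methods must determine. An illustrative sketch, not the queueing model analyzed in the thesis.

```python
import heapq

class ReorderBuffer:
    """Restore in-order delivery of packets arriving over heterogeneous paths.

    Packets are released as soon as the next expected sequence number is
    buffered; `capacity` bounds the buffer size (the dimensioning parameter).
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []
        self.next_seq = 0

    def push(self, seq):
        heapq.heappush(self.heap, seq)
        released = []
        # Release the longest in-order prefix now available.
        while self.heap and self.heap[0] == self.next_seq:
            released.append(heapq.heappop(self.heap))
            self.next_seq += 1
        if len(self.heap) > self.capacity:
            raise OverflowError("buffer under-dimensioned for this path delay skew")
        return released

buf = ReorderBuffer(capacity=4)
assert buf.push(1) == []          # out of order: held back
assert buf.push(0) == [0, 1]      # gap closed: both released in order
```

The larger the delay skew between the paths, the larger `capacity` must be to avoid overflow, which is precisely why path characteristics like delay difference drive buffer dimensioning.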
|
13 |
An Efficient Motion Estimation Method for H.264-Based Video Transcoding with Arbitrary Spatial Resolution Conversion / Wang, Jiao / January 2007 (has links)
As wireless and wired network connectivity rapidly expands
and the number of network users steadily increases, it has become more
and more important to support universal access to multimedia
content across the whole network. A big challenge, however, is
the great diversity of network devices, from full-screen computers
to small smartphones. This motivates research on transcoding,
which involves efficiently reformatting compressed data from
its original high resolution to a desired spatial resolution
supported by the displaying device. In particular, there is
great momentum in the multimedia industry for H.264-based
transcoding, as H.264 has been widely adopted as a mandatory
player feature in applications ranging from television broadcast
to video for mobile devices.
While H.264 contains many new features for effective video
coding with excellent rate-distortion (RD) performance, a major issue
in transcoding H.264 compressed video from one spatial resolution
to another is the computational complexity, specifically in the
motion-compensated prediction (MCP) part. MCP is the main
contributor to the excellent RD performance
of H.264 video compression, yet it is very time consuming: in general,
a brute-force search is used to find the best motion vectors for MCP.
In the scenario of transcoding, however, an immediate idea for
improving the MCP efficiency for the re-encoding procedure is to
utilize the motion vectors in the original compressed stream.
Intuitively, motion in the high resolution scene is highly related
to that in the down-scaled scene.
In this thesis, we study homogeneous video transcoding from H.264
to H.264. Specifically, for the video transcoding with arbitrary
spatial resolution conversion, we propose a motion vector estimation
algorithm based on a multiple linear regression model, which
systematically utilizes the motion information in the original scenes.
We also propose a practical solution for efficiently determining a
reference frame, to take advantage of the new multiple-reference
feature of H.264. The performance of the algorithm was assessed
in an H.264 transcoder. Experimental results show that, compared
with a benchmark solution, the proposed method significantly reduces
the transcoding complexity without much degradation of the video quality.
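The motion-vector reuse idea can be sketched in two steps: a baseline predictor that area-weights and scales the original-resolution motion vectors, and a linear regression fitted by normal equations. This is a simplified stand-in for the multiple linear regression model proposed in the thesis; the function names and the two-feature restriction are illustrative assumptions.

```python
def scaled_mv_prediction(original_mvs, areas, scale):
    """Area-weighted average of original-resolution motion vectors,
    scaled by the downsampling ratio -- a common baseline predictor."""
    total = sum(areas)
    mvx = sum(mv[0] * a for mv, a in zip(original_mvs, areas)) / total
    mvy = sum(mv[1] * a for mv, a in zip(original_mvs, areas)) / total
    return (mvx * scale, mvy * scale)

def fit_two_feature_regression(X, y):
    """Least-squares fit of y ~ w0*x0 + w1*x1 via the 2x2 normal equations.
    X is a list of (x0, x1) feature pairs, y the target values
    (e.g. true motion components found by full search on training frames)."""
    s00 = sum(x[0] * x[0] for x in X)
    s01 = sum(x[0] * x[1] for x in X)
    s11 = sum(x[1] * x[1] for x in X)
    b0 = sum(x[0] * t for x, t in zip(X, y))
    b1 = sum(x[1] * t for x, t in zip(X, y))
    det = s00 * s11 - s01 * s01
    return ((s11 * b0 - s01 * b1) / det, (s00 * b1 - s01 * b0) / det)
```

At transcoding time the fitted weights turn the candidate vectors from the covering macroblocks into a prediction, which then only needs a small refinement search instead of a brute-force one.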
|
15 |
The implementation of H.264 algorithm with parallel extended MMX instruction set / Shen, Cheng-Ying / 20 August 2008 (links)
The H.264 protocol is an important method for multimedia transmission and computation, but it is difficult to run smoothly on embedded systems because of their low clock rates. Although many new multimedia instruction sets have been developed, real-time multimedia computation is still difficult to achieve on embedded systems.
This thesis therefore uses the "Multimedia Operation Register", a SIMD architecture, to implement the H.264 algorithm on an embedded system and improve the performance of multimedia computation. The Multimedia Operation Register, which performs parallel execution over multiple data streams, uses a bit-slice design that combines a bit-storage cell with a bit-computation unit into an operation pair. Exploiting the fact that related data items are often stored at constant strides in memory, a strided addressing mode, cooperating with the parallel multi-data-stream execution, is used to load data from strided addresses in a single instruction. In addition, a new instruction set is designed based on the Intel MMX instruction set and the operational characteristics of multimedia computation.
When a designer implements the H.264 protocol with conventional single-data-stream multimedia instruction sets, many iterations are needed to process each block. With the Multimedia Operation Register, fewer iterations suffice, because the parallel execution of multiple data streams can process data from many different blocks of the H.264 protocol at the same time. Furthermore, the number of registers allocated to each arithmetic unit can be reconfigured flexibly by changing the working mode. The new instructions also save considerable execution time in operations such as matrix transposition, data reordering, and the SAD (Sum of Absolute Differences) calculation. To reduce the number of memory accesses, data are rotated between two registers so that loaded data are reused as much as possible. Together, these methods greatly improve coding efficiency.
The conclusions of this thesis show that parallel execution over multiple data streams is a very important method for handling multimedia computation, and an innovative architecture is proposed to implement it. According to the simulations in Chapter 5, the Multimedia Operation Register handles the H.264 protocol more than four times faster than the MMX instruction set, and the SAD calculation more than ten times faster. Its efficiency even exceeds that of the more recent SSE4 multimedia instruction set.
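The SAD operation mentioned above, which the proposed architecture accelerates more than tenfold, is simple to state in scalar form. The sketch below is the reference (non-SIMD) computation that such instruction sets parallelize across blocks.

```python
def sad(block_a, block_b):
    """Sum of Absolute Differences between two equally sized pixel blocks.
    Blocks are lists of rows of integer pixel values."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def best_match(ref, cand_blocks):
    """Pick the candidate block minimising SAD -- the inner loop of motion search.
    A SIMD engine evaluates many such candidates (or many blocks) in parallel."""
    return min(range(len(cand_blocks)), key=lambda i: sad(ref, cand_blocks[i]))
```

Because every pixel pair is independent, the absolute differences map directly onto parallel lanes, which is why SAD is the standard showcase for multimedia SIMD extensions.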
|
16 |
Adaptive Frame Structure Determination for Hierarchical B Frame Coding / Lai, Chung-Ping / 09 September 2009 (links)
Hierarchical B picture coding was introduced in the extension of H.264/AVC in order to improve coding performance and provide temporal scalability. In general, coding performance is affected by the content variation within each GOP (Group of Pictures); therefore, determining the size of each sub-GOP is a critical problem for video coding. In this thesis, an adaptive GOP structure determination scheme is proposed to select the appropriate sub-GOP size, taking content complexity into consideration. We compute frame differences following the hierarchical B picture structure and use this information as the basis of the sub-GOP decision, thereby obtaining a proper combination of sub-GOP sizes. Experimental results present RD curves comparing our proposed method with the fixed GOP setting in the existing hierarchical B picture coding of SVC.
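The decision logic can be sketched as follows: measure frame differences, then map the average content activity to a dyadic sub-GOP size. The thresholds and the exact mapping below are illustrative assumptions, not the thesis's actual decision rule.

```python
def frame_difference(f1, f2):
    """Mean absolute pixel difference between two frames (lists of rows)."""
    n = sum(len(row) for row in f1)
    return sum(abs(a - b)
               for r1, r2 in zip(f1, f2)
               for a, b in zip(r1, r2)) / n

def choose_sub_gop(differences, thresholds=(2.0, 8.0)):
    """Map average content activity to a dyadic sub-GOP size:
    low activity  -> long sub-GOP (hierarchical B prediction works well),
    high activity -> short sub-GOP (long-range prediction would fail)."""
    avg = sum(differences) / len(differences)
    low, high = thresholds
    if avg < low:
        return 8
    if avg < high:
        return 4
    return 2
```

A static scene thus gets a deep hierarchy (size 8), while a scene cut or fast motion falls back to short sub-GOPs, which is the adaptivity the fixed-GOP baseline lacks.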
|
17 |
Implementation of real-time DIS H.264 Encoder for Airborne Recorder / Nam, Ju-Hun; Kim, Seong-Jong; Kim, Sung-Min; Lee, Nam-Sik; Kim, Jin-Hyung / 10 1900 (links)
ITC/USA 2012 Conference Proceedings / The Forty-Eighth Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2012 / Town and Country Resort & Convention Center, San Diego, California / When developing a video compression system in a black box for aircraft, it is necessary to consider the characteristics of the images and the surrounding environment. The images captured in and out of an aircraft suffer from excessive movement, which makes the results difficult to analyze and interpret. Failure to remove this tremor from the video inevitably leads to poor compression efficiency and degrades the video imaging performance of the airborne black box. Therefore, it is necessary to develop a compression system that can stabilize the video image and efficiently perform high-compression recording for aircraft without special hardware. Given this situation, we propose a real-time electronic video stabilization algorithm for an airborne recorder, which recovers shaky images simply and efficiently and works alongside an H.264 encoder implemented on a DSP.
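One classic low-cost building block for electronic stabilization of this kind is integral-projection matching: collapse each frame to a 1-D profile and find the shift that best aligns consecutive profiles. The sketch below illustrates the idea for vertical jitter only; it is a simplified assumption-laden illustration, not the paper's algorithm.

```python
def row_projection(frame):
    """Collapse a frame (list of pixel rows) to a 1-D vertical profile."""
    return [sum(row) for row in frame]

def estimate_vertical_shift(prev, curr, max_shift=2):
    """Estimate global vertical jitter between frames by matching row
    projections: returns s such that curr row (i + s) best matches prev row i."""
    p, c = row_projection(prev), row_projection(curr)

    def cost(s):
        pairs = [(p[i], c[i + s]) for i in range(len(p)) if 0 <= i + s < len(c)]
        return sum(abs(a - b) for a, b in pairs) / len(pairs)

    return min(range(-max_shift, max_shift + 1), key=cost)
```

The compensating crop/shift then removes the jitter before encoding, so the H.264 motion search no longer wastes bits modelling camera shake.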
|
18 |
Digital Surveillance Based on Video CODEC System-on-a-Chip (SoC) Platforms / Zhao, Wei / 04 November 2010 (links)
Today, most conventional surveillance networks are based on analog systems, which impose many constraints, such as heavy manpower and high-bandwidth requirements; this has become a barrier to the development of today's surveillance networks. This dissertation describes a digital surveillance network architecture based on the H.264 coding/decoding (CODEC) System-on-a-Chip (SoC) platform. The proposed architecture includes three major layers: the software layer, the hardware layer, and the network layer. The contributions to the proposed digital surveillance network architecture are as follows. (1) We implement an object recognition system and an object categorization system on the software layer by applying several Digital Image Processing (DIP) algorithms. (2) For a better compression ratio and higher-quality video transfer, we implement two new modules on the hardware layer of the H.264 CODEC core: a background elimination module and a Directional Discrete Cosine Transform (DDCT) module. (3) Furthermore, we introduce a Digital Signal Processor (DSP) subsystem on the main bus of the H.264 SoC platform as the major hardware support for our software architecture. We thus combine the software and hardware platforms into an intelligent surveillance node. Lab results show that the proposed surveillance node can dramatically save network resources such as bandwidth and storage capacity.
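A background elimination module of the kind described can be illustrated with a running-average background model plus a threshold test: pixels flagged as foreground are candidates for full-rate coding, while static background regions can be skipped or coded at low rate. The model, the learning rate `alpha`, and the threshold below are illustrative assumptions, not the dissertation's hardware design.

```python
def update_background(bg, frame, alpha=0.05):
    """Running-average background model: bg <- (1 - alpha) * bg + alpha * frame."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(bg_row, fr_row)]
            for bg_row, fr_row in zip(bg, frame)]

def foreground_mask(bg, frame, threshold=20):
    """Mark pixels deviating from the background model (1 = foreground).
    In the surveillance node, only these regions need full-rate coding."""
    return [[1 if abs(f - b) > threshold else 0 for b, f in zip(bg_row, fr_row)]
            for bg_row, fr_row in zip(bg, frame)]
```

Transmitting only the foreground mask's regions is what yields the bandwidth and storage savings the dissertation reports for its hardware module.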
|
19 |
Protection of Scalable Video by Encryption and Watermarking / Protection des Vidéos Hiérarchiques par Cryptage et Tatouage / Shahid, Muhammad Zafar Javed / 08 October 2010 (links)
The field of image and video processing has attracted a lot of attention during the last two decades. It now covers a vast spectrum of applications such as 3D TV, tele-surveillance, computer vision, medical imaging, compression, transmission and much more. Of particular interest is the revolution witnessed during the first decade of the twenty-first century: network bandwidths, memory capacities and computing efficiencies have all increased dramatically. One client may have a 100 Mbps connection whereas another may be using a 56 kbps dial-up modem. Likewise, one client may have a powerful workstation while another has just a smartphone. In between these extremes, there may be thousands of clients with varying capabilities and needs. Moreover, the preferences of a client may adapt to his capacity; for example, a client constrained by bandwidth may be more interested in uninterrupted real-time visualization than in high resolution. To cope with this, scalable video codec architectures have been introduced, following the 'compress once, decompress many ways' paradigm. Since the DCT lacks multi-resolution functionality, a scalable video architecture is designed to cope with the challenges of heterogeneous bandwidth and processing power. With the inundation of digital content, which can easily be copied and modified, the need to protect video content has gained attention.
Video protection can be materialized with the help of three technologies: watermarking for metadata and copyright insertion, encryption to restrict access to authorized persons, and active fingerprinting for traitor tracing. The main idea in our work is to make the protection technology transparent to the user. This results in a modified video codec capable of encoding and playing a protected bitstream. Since scalable multimedia content has already started coming to the market, algorithms for independent protection of enhancement layers are also proposed.
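Independent protection of enhancement layers can be sketched as deriving a separate keystream per layer and ciphering each layer's payload with it, so that access to the base layer does not grant access to higher layers. The hash-based keystream below is a placeholder for illustration only; a real codec-embedded scheme would use a standard cipher such as AES in counter mode with a proper key schedule.

```python
import hashlib

def layer_keystream(master_key: bytes, layer_id: int, n: int) -> bytes:
    """Derive n bytes of per-layer keystream from a master key (hash-based sketch)."""
    out = b""
    counter = 0
    while len(out) < n:
        block = master_key + bytes([layer_id]) + counter.to_bytes(4, "big")
        out += hashlib.sha256(block).digest()
        counter += 1
    return out[:n]

def protect_layer(payload: bytes, master_key: bytes, layer_id: int) -> bytes:
    """XOR the layer payload with its keystream; applying it again decrypts."""
    ks = layer_keystream(master_key, layer_id, len(payload))
    return bytes(p ^ k for p, k in zip(payload, ks))

data = b"enhancement-layer NAL payload"
enc = protect_layer(data, b"secret", layer_id=1)
assert protect_layer(enc, b"secret", layer_id=1) == data
```

Because each layer uses its own keystream, a distributor can sell keys per quality level while shipping a single protected scalable bitstream.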
|
20 |
Adaptação de stream de vídeo em veículos aéreos não tripulados / Video stream adaptation on unmanned aerial vehicles / Thiago Henrique Martinelli / 24 September 2012 (links)
Unmanned Aerial Vehicles (UAVs) are being increasingly used in several countries, in both military and civilian domains. In this study we consider a UAV equipped with a camera, capturing video for real-time transmission to a ground base over a wireless network. The problem is that it is not possible to guarantee a continuous transmission rate with stable bandwidth. This occurs due to factors such as the speed of the aircraft (on the order of hundreds of km/h), terrain irregularities (blocking the line of sight of the transmission link), and the weather, as storms, heat and fog, for instance, can interfere with the RF transmission.
Finally, the movements that the UAV performs in flight (roll, pitch and yaw) can impair link availability. Thus, it is necessary to adapt the video according to the available bandwidth. When the link quality degrades, a reduction in the size of the video must be performed, avoiding interruption of the transmission. Additionally, the adaptation must also ensure that the available bandwidth is used, avoiding sending video at a lower quality than a given bandwidth would allow. In this work we propose a system that varies the total amount of data being transmitted by adjusting the compression parameters of the video, covering the bandwidth range from 8 Mbps down to zero. We use the H.264/AVC codec with scalable video coding.
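The adaptation policy, pick the best video quality the measured bandwidth can sustain, down to suspending transmission at zero, can be sketched as a table of SVC operating points. The operating points below are hypothetical examples, not the thesis's measured rates.

```python
# Hypothetical SVC operating points: (required kbps, description).
LAYERS = [
    (0,    "suspend"),
    (256,  "QCIF low quality"),
    (1024, "CIF base quality"),
    (4000, "SD medium quality"),
    (8000, "full quality"),
]

def select_layer(available_kbps):
    """Pick the highest operating point the measured bandwidth can sustain,
    so the link is neither overloaded nor left underused."""
    best = LAYERS[0]
    for required, name in LAYERS:
        if required <= available_kbps:
            best = (required, name)
    return best
```

Re-evaluating this selection as bandwidth estimates arrive gives the continuous adaptation over the 8 Mbps-to-zero range described above.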
|