181

Implementação física de arquiteturas de hardware para a decodificação de vídeo digital segundo o padrão H.264/AVC / Physical implementation of hardware architectures for video decoding according to the H.264/AVC standard

Silva, Leandro Max de Lima January 2010
Recently, Brazil adopted the SBTVD (Brazilian Digital Television System) standard for digital TV transmission. It uses the H.264/AVC video CODEC (coder and decoder), which is considered the state of the art in digital video compression. The transition to the SBTVD standard requires the development of technology for transmitting, receiving and decoding signals, so the Rede H.264 SBTVD project was initiated with the objective of producing hardware components to build a set-top box SoC (System on Chip) compatible with the SBTVD. In order to produce IPs (Intellectual Property) for encoding and decoding digital video according to the H.264/AVC standard, several hardware architectures have been developed within the project. The objective of this work is therefore to carry out the ASIC (Application-Specific Integrated Circuit) physical implementation flow for some of these hardware architectures for H.264/AVC video decoding, among them the parser and entropy decoding, intra-frame prediction, and inverse quantization and transform architectures, which together form a working version of an H.264 video decoder called the intra-only decoder. In addition, an architecture for the deblocking filter module and architectures for the Main and High profiles of a motion compensator were also physically implemented. This master's thesis presents the standard-cell (ASIC) implementation methodology used, as well as a detailed description of each step taken to reach the layout of each architecture. It also presents the results of the implementations and comparisons with other architecture implementations described in the literature. The filter implementation has 43.9K logic gates (equivalent gates), consumes 42 mW and requires the least internal memory, 12.375 KB of SRAM, when compared with other implementations for the same video resolution, 1920x1080@30fps. The implementations of the Main and High profiles of the motion compensator show the best trade-off between the number of clock cycles required to interpolate a macroblock (MB), 304 cycles/MB, and the equivalent-gate count of each implementation, 98K and 102K, respectively. The H.264 intra-only decoder implementation uses 5 KB of SRAM, consumes 11.4 mW and has the lowest equivalent-gate count, 150K, compared with other H.264 decoder implementations with similar features.
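As a back-of-the-envelope check on the motion-compensator figures above (an illustrative estimate, not taken from the thesis): a 1920x1080 frame is coded as 120 x 68 = 8160 macroblocks (the height is padded to 1088 for coding), so sustaining 30 frames per second at 304 cycles/MB requires roughly 8160 x 30 x 304 ≈ 74.4 million cycles per second, i.e. a clock of about 75 MHz under these assumptions.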
182

Projeto da arquitetura de hardware para binarização e modelagem de contextos para o CABAC do padrão de compressão de vídeo H.264/AVC / Hardware architecture design for binarization and context modeling for CABAC of H.264/AVC video compression

Martins, André Luis Del Mestre January 2011
The Context-based Adaptive Binary Arithmetic Coding (CABAC) adopted by the H.264/AVC standard from the Main profile onwards is the state of the art in terms of bit-rate efficiency. However, CABAC takes 9.6% of the total encoding time and its throughput is limited by bit-level data dependencies (LIN, 2010). Meeting real-time requirements with a pure software CABAC encoder is therefore difficult at the highest levels of the H.264/AVC standard, so speeding up the CABAC through a hardware implementation is required. The CABAC hardware architectures found in the literature focus on the Binary Arithmetic Encoder (BAE), while Binarization and Context Modeling (BCM) is treated as a secondary issue or is absent altogether. Together, the BCM and the BAE constitute the CABAC. This dissertation describes in detail the set of algorithms that make up the BCM of the H.264/AVC standard. Then, the design of a hardware architecture dedicated to the BCM is presented. The proposed design is described in VHDL, and the synthesis results show that the architecture reaches sufficient performance, in both FPGA and ASIC, to process video in real time at level 5 of the H.264/AVC standard. The proposed design is 13.3% faster than the best related works while being equally efficient in area.
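To illustrate the kind of mapping the BCM performs, below is a minimal Python sketch (not the thesis's VHDL) of two binarization schemes defined by the H.264/AVC standard: unary binarization and the k-th order Exp-Golomb (EGk) code used as the suffix of UEGk binarizations:

    def unary(value):
        """Unary binarization: 'value' ones followed by a terminating zero."""
        return [1] * value + [0]

    def exp_golomb_k(value, k):
        """k-th order Exp-Golomb (EGk) bin string, following the pseudocode
        in subclause 9.3.2.3 of the H.264/AVC specification."""
        bins = []
        while value >= (1 << k):
            bins.append(1)            # unary-like prefix
            value -= 1 << k
            k += 1
        bins.append(0)                # prefix terminator
        for i in reversed(range(k)):  # k-bit suffix, most significant bit first
            bins.append((value >> i) & 1)
        return bins

For example, unary(3) gives [1, 1, 1, 0] and exp_golomb_k(3, 0) gives [1, 1, 0, 0, 0].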
183

Analysis of packet loss and delay variation on QoE for H.264 and WebM/VP8 Codecs / Analys av paketförlust och fördröjning variation på QoE för H.264 och WebM/VP8 Codecs

Alahari, Yeshwanth, Buddhiraja, Prashant January 2011
The popularity of multimedia services over the Internet has increased in recent years. These services include Video on Demand (VoD) and mobile TV, which are growing strongly, and user expectations of video quality are gradually increasing. Different video codecs are used for encoding and decoding. Recently, Google introduced the VP8 codec, an open-source compression format. It was introduced to compete with the existing popular codec H.264/AVC, developed by the ITU-T Video Coding Experts Group (VCEG), since a license fee for H.264 was expected by 2016. In this work we compare the performance of H.264/AVC and WebM/VP8 in an emulated environment. NetEm is used as the emulator to introduce delay, delay variation and packet loss. We evaluated user perception of the impaired videos using the Mean Opinion Score (MOS), following the International Telecommunication Union (ITU) recommendation on Absolute Category Rating (ACR), and analyzed the results using statistical methods. It was found that both video codecs exhibit similar performance under packet loss, but in the case of delay variation the H.264 codec shows better results than WebM/VP8. Moreover, along with the MOS ratings, we also studied how users' feelings and online video watching experience affect their perception.
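Network impairments of this kind are typically introduced with the Linux traffic-control NetEm queueing discipline. A minimal Python wrapper is sketched below; the interface name and the delay, jitter and loss values are placeholders rather than the settings used in the thesis, and the commands require root privileges:

    import subprocess

    def apply_netem(iface="eth0", delay="100ms", jitter="20ms", loss="1%"):
        """Attach a NetEm qdisc that adds delay, delay variation and packet loss."""
        subprocess.run(
            ["tc", "qdisc", "add", "dev", iface, "root", "netem",
             "delay", delay, jitter, "loss", loss],
            check=True,
        )

    def clear_netem(iface="eth0"):
        """Remove the NetEm qdisc again."""
        subprocess.run(["tc", "qdisc", "del", "dev", iface, "root", "netem"],
                       check=True)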
184

Image coding with H.264 I-frames / Stillbildskodning med H.264 I-frames

Eklund, Anders January 2007
In this thesis work, a part of the H.264 video coding standard has been implemented. The part of the video coder that is used to code the I-frames has been implemented in order to see how well suited it is for regular image coding. The big difference compared with other image coding standards, such as JPEG and JPEG2000, is that this video coder uses both a predictor and a transform to compress the I-frames, while JPEG and JPEG2000 only use a transform. Since the prediction error is sent instead of the actual pixel values, many of the values are zero or close to zero before the transformation and quantization. The method is thus much like a video encoder, except that blocks within an image are predicted instead of frames in a video sequence.
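The predict-then-transform idea can be sketched for a single 4x4 block as follows (an illustration, not the thesis code): DC intra prediction from the reconstructed neighbouring samples, followed by the H.264 4x4 core integer transform of the residual (the scaling normally folded into quantization is omitted):

    import numpy as np

    # H.264 4x4 forward core transform matrix
    Cf = np.array([[1,  1,  1,  1],
                   [2,  1, -1, -2],
                   [1, -1, -1,  1],
                   [1, -2,  2, -1]])

    def dc_predict(top, left):
        """DC intra mode: every sample predicted as the rounded mean of the
        four reconstructed samples above and the four to the left."""
        dc = (int(np.sum(top)) + int(np.sum(left)) + 4) >> 3
        return np.full((4, 4), dc, dtype=int)

    def transform_residual(block, top, left):
        residual = np.asarray(block, dtype=int) - dc_predict(top, left)
        return Cf @ residual @ Cf.T   # coefficients stay small thanks to the prediction

Because the residual rather than the raw block is transformed, most coefficients end up near zero, which is what makes the subsequent quantization and entropy coding effective.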
185

Impact of Packet Losses on the Quality of Video Streaming

Adebomi, OYEKANLU Emmanuel, Mwela, JOHN Samson January 2010
In this thesis, the impact of packet losses on the quality of videos received across a network exhibiting normal perturbations, such as jitter, delay and packet drops, has been examined. The dynamic behavior of such a network has been emulated using Linux and the Network Emulator (NetEm). Viewers' perceptions of the received video quality were used to rate several videos with differing speeds. In accordance with the ITU's guideline of using Mean Opinion Scores (MOS), the effects of packet drops were analyzed. Excel and Matlab were used as tools for analyzing the viewers' opinions, which indicate the impact that different loss rates have on the transmitted videos. The statistical methods used for evaluating the data are the mean and the variance. We conclude that viewers' opinions converge when losses become extremely high on videos with highly variable scene changes.
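The per-condition statistics mentioned above amount to the mean and variance of the ACR ratings collected for each loss rate. A small illustration in Python (the thesis used Excel and Matlab; the ratings below are made up):

    from statistics import mean, pvariance

    ratings_by_loss = {          # ACR scores on the 1-5 scale per packet-loss rate
        "0%": [5, 4, 5, 4, 4],
        "1%": [4, 3, 4, 3, 4],
        "5%": [2, 2, 1, 2, 2],
    }

    for loss, scores in ratings_by_loss.items():
        print(f"{loss} loss: MOS = {mean(scores):.2f}, variance = {pvariance(scores):.2f}")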
186

Free Viewpoint TV

Hussain, Mudassar January 2010
This thesis work concerns free viewpoint TV. The main idea is that users can switch between multiple streams in order to find views of their own choice. The purpose is to provide fast switching between the streams, so that users experience less delay when switching views. In this thesis work we discuss different video stream switching methods in detail, and then the issues related to those methods, including transmission and switching. We also discuss different scenarios for fast stream switching that make the service more interactive by minimizing delays. Stream switching time differs between live and recorded events. Quality of service (QoS) is another factor to consider, and it can be improved by assigning priorities to packets. We discuss simultaneous stream transmission methods that are based on prediction and on reduced-quality streams for providing fast switching. We present an algorithm for viewpoint prediction, propose a system model for fast viewpoint switching, and evaluate simultaneous stream transmission methods for free viewpoint TV. Finally, we draw our conclusions and propose future work.
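One simple way to picture the simultaneous-transmission idea described above (an illustration only, not the prediction algorithm or system model proposed in the thesis) is to request the current view at full quality while prefetching the neighbouring views, the most likely switch targets, at reduced quality, so that a switch can be shown immediately from the low-quality copy while the full-quality stream catches up:

    def views_to_request(current_view, num_views, reach=1):
        """Return (view index, quality) pairs for one round of requests."""
        requests = [(current_view, "full")]
        for offset in range(1, reach + 1):
            for neighbour in (current_view - offset, current_view + offset):
                if 0 <= neighbour < num_views:
                    requests.append((neighbour, "reduced"))
        return requests

    # views_to_request(3, 8) -> [(3, 'full'), (2, 'reduced'), (4, 'reduced')]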
187

Multi-View Video Transmission over the Internet

Abdullah Jan, Mirza, Ahsan, Mahmododfateh January 2010
3D television using multiple-view rendering is receiving increasing interest. In this technology a number of video sequences are transmitted simultaneously to provide a larger view of the scene or a stereoscopic viewing experience. With two views, stereoscopic rendition is possible. 3D displays are now available that are capable of displaying several views simultaneously, and the user is able to see different views by moving his or her head. The thesis work aims at implementing a demonstration system with a number of simultaneous views. The system includes two cameras, computers at both the transmitting and the receiving end, and a multi-view display. Besides setting up the hardware, the main task is to implement software so that the transmission can be done over an IP network. This thesis report includes an overview of, and experiences with, similar published systems; the implementation of real-time video, its compression, encoding and transmission over the Internet with the help of socket programming; and finally the multi-view display in 3D format. The report also describes the design considerations regarding video coding and network protocols in more detail.
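A minimal sketch of what such socket-based transmission could look like (an assumption for illustration, not the system actually implemented; the header layout and field names are invented): each encoded frame is sent over TCP preceded by a small header carrying the view (camera) id, a frame number and the payload length, so the receiver can route frames to the correct view of the multi-view display:

    import socket
    import struct

    HEADER = struct.Struct("!BII")   # view_id (1 byte), frame_no, payload length

    def send_frame(sock, view_id, frame_no, payload):
        """Send one encoded frame preceded by its routing header."""
        sock.sendall(HEADER.pack(view_id, frame_no, len(payload)) + payload)

    def recv_frame(sock):
        """Read one header plus payload and return (view_id, frame_no, payload)."""
        header = sock.recv(HEADER.size, socket.MSG_WAITALL)
        view_id, frame_no, length = HEADER.unpack(header)
        return view_id, frame_no, sock.recv(length, socket.MSG_WAITALL)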
188

HTTP Live Streaming : En studie av strömmande videoprotokoll

Swärd, Rikard January 2013
The use of streaming video is growing rapidly at the moment. A popular concept is adaptive bitrate streaming, in which a video is encoded in several different bit rates. These videos are then split into small files and made available via the Internet. When you want to play such a video, you first download a file that describes where the files are located and in which bit rates they are encoded. The media player can then begin downloading the files and playing them. If the physical conditions, such as download speed or CPU load, change during playback, the media player can easily change the quality of the video by starting to download files of a different bit rate, so that the video does not stall. This report therefore takes a closer look at four techniques for adaptive bitrate streaming: HTTP Live Streaming, Dynamic Adaptive Streaming over HTTP, HTTP Dynamic Streaming and Smooth Streaming, examined with respect to the protocols they use. The report also examines how Apple and FFmpeg have implemented HTTP Live Streaming with respect to how much data must be read from a file before the video can start playing. The report shows that there are no large differences between the four techniques. However, Dynamic Adaptive Streaming over HTTP stands out somewhat by being completely independent of the audio and video protocols used. The report also points out a shortcoming in the specification of HTTP Live Streaming, since it is not specified that the first complete frame of the video stream should be at the beginning of the file. In Apple's implementation up to 30 kB of data must be read before playback can start, while in FFmpeg's implementation it is about 600 bytes.
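The adaptive-bitrate mechanism described above can be sketched in a few lines of Python (illustrative only, not the report's tooling; the playlist content and URIs are made up): parse the BANDWIDTH attribute of each variant in an HLS master playlist and pick the highest-bandwidth variant the measured throughput can sustain:

    import re

    MASTER_PLAYLIST = """#EXTM3U
    #EXT-X-STREAM-INF:BANDWIDTH=400000,RESOLUTION=640x360
    low/index.m3u8
    #EXT-X-STREAM-INF:BANDWIDTH=1200000,RESOLUTION=1280x720
    mid/index.m3u8
    #EXT-X-STREAM-INF:BANDWIDTH=3000000,RESOLUTION=1920x1080
    high/index.m3u8
    """

    def parse_variants(playlist):
        """Return (bandwidth_bps, uri) pairs from the #EXT-X-STREAM-INF lines."""
        lines = [line.strip() for line in playlist.strip().splitlines()]
        variants = []
        for i, line in enumerate(lines):
            match = re.search(r"#EXT-X-STREAM-INF:.*BANDWIDTH=(\d+)", line)
            if match:
                variants.append((int(match.group(1)), lines[i + 1]))
        return variants

    def choose_variant(variants, measured_bps):
        """Highest-bandwidth variant not exceeding the throughput, else the lowest."""
        affordable = [v for v in variants if v[0] <= measured_bps]
        return max(affordable) if affordable else min(variants)

    # choose_variant(parse_variants(MASTER_PLAYLIST), 1_500_000)
    # -> (1200000, 'mid/index.m3u8')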
189

Enhancement of LTE Radio Access Protocols for Efficient Video Streaming

Tirouvengadam, Balaaji January 2012
A drastic increase in mobile broadband traffic has been seen in the past few years, further accelerated by the growing use of smartphones and their applications. The availability of capable smartphones and better data connectivity is encouraging mobile users to use video services. This huge increase in usage poses many challenges for wireless networks. The wireless network has to become content-aware in order to offer enhanced video quality through efficient utilization of the wireless spectrum. This thesis focuses on improving the Quality of Experience (QoE) for video transmission over Long Term Evolution (LTE) networks by imparting content awareness to the system and providing unequal error protection for critical video packets. Two different schemes for improving video delivery quality over LTE networks are presented. In the first, the Hybrid Automatic Repeat reQuest (HARQ) retransmission count is changed dynamically using content awareness, so that the most important video frames receive more retransmission attempts, which increases their chance of delivery and, in turn, the received video quality. Since Radio Link Control (RLC) is the link layer of the radio interface, the second approach focuses on optimizing this layer for efficient video transmission. As part of this scheme, a new RLC operation mode called Hybrid Mode (HM) is defined. This mode performs retransmission only for the critical video frames, leaving other frames to unacknowledged transmission. Simulation results show that both proposed schemes provide a significant improvement in video quality without affecting overall system performance.
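The content-aware retransmission idea described above boils down to choosing the retransmission budget from the importance of the video frame a packet carries. A simplified sketch (not the thesis's simulator code; the limits below are placeholders):

    # Maximum link-layer retransmissions per video frame type: frames that other
    # frames depend on (I, then P) get more delivery attempts than B frames.
    MAX_RETX_BY_FRAME_TYPE = {"I": 4, "P": 2, "B": 0}

    def max_retransmissions(frame_type, default=1):
        """Retransmission budget for a packet carrying the given frame type."""
        return MAX_RETX_BY_FRAME_TYPE.get(frame_type, default)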
190

Ruční dálkový ovladač pro robot Perseus / Operator's station for Perseus mobile robot

Sabó, Marek January 2019
This thesis deals with the design and implementation of an application for controlling a mobile robot. The introductory section discusses the GEARS-SMP platform used, the operating principle of the servo motor control protocol, the M-JPEG format and the H.264 standard. The thesis then analyzes user interface design in robotic applications, the available options for control devices, and the hardware used in the remote controller. The following part focuses on the design of the robotic application, especially the graphical user interface and virtual head-up display, and on the subsequent implementation of the application on a Raspberry Pi. Finally, the thesis describes the implemented software solution and compares the resulting application with the original design.
