1

Error Resilient Coding Using Flexible Macroblock Ordering In Wired And Wireless Communications

Demirtas, Ali Murat, 01 September 2008
Error resilient coding tools are methods that avoid or reduce corruption in decoded video by altering the encoding algorithm. One of these tools is Flexible Macroblock Ordering (FMO), which allows the macroblocks of a frame to be assigned to slice groups in a flexible order. Six of the seven FMO types use a predefined ordering pattern, while the last one, the explicit type, can use any order. This thesis explains two explicit-type algorithms, one of which is new, and evaluates the performance of the different FMO types in wired and wireless communication. The first algorithm distributes the important macroblocks across separate packets, equalizing the importance of the packets. The proposed method instead allocates the important macroblocks according to a checkerboard pattern and applies unequal error protection to protect them more strongly. Simulations are performed for wired and wireless channels, with Forward Error Correction (FEC) added in the second stage. Finally, the results of the new algorithms are compared with the performance of the other FMO types. According to the simulations, the proposed algorithm outperforms the others when the error rate is very high and FEC is employed.
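As a rough illustration of the checkerboard allocation described above, the Python sketch below builds an explicit FMO macroblock-to-slice-group map that alternates slice groups like a checkerboard; the group holding the more important macroblocks could then receive stronger FEC. This is not the author's implementation: the function name, the two-group split and the QCIF example are assumptions made for the illustration.

```python
# Illustrative sketch (not the thesis code): an explicit FMO map that assigns
# macroblocks to two slice groups in a checkerboard pattern, so neighbouring
# macroblocks travel in different packets and a lost region keeps surviving
# neighbours available for concealment.

def checkerboard_fmo_map(mb_width: int, mb_height: int) -> list[int]:
    """Return a slice-group id (0 or 1) for each macroblock in raster order."""
    groups = []
    for y in range(mb_height):
        for x in range(mb_width):
            groups.append((x + y) % 2)   # alternate groups like a checkerboard
    return groups


if __name__ == "__main__":
    # QCIF (176x144) is 11 x 9 macroblocks of 16x16 pixels.
    fmo_map = checkerboard_fmo_map(11, 9)
    print(fmo_map[:11])   # first macroblock row: [0, 1, 0, 1, ...]
    # In an unequal-error-protection setup, the packets of one slice group
    # would be given the stronger FEC code (an assumption, for illustration).
```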
2

Error resilience for video coding services over packet-based networks

Zhang, Jian (Electrical Engineering, Australian Defence Force Academy, UNSW), January 1999
Error resilience is an important issue when coded video data is transmitted over wired and wireless networks. Errors can be introduced by network congestion, mis-routing and channel noise. These transmission errors can result in bit errors being introduced into the transmitted data or in packets of data being lost entirely. Consequently, the quality of the decoded video is degraded significantly. This thesis describes new techniques for minimising this degradation.

To verify video error resilience tools, it is first necessary to consider the methods used to carry out experimental measurements. For most audio-visual services, streams of both audio and video data need to be transmitted simultaneously on a single channel, so the impact of multiplexing schemes such as MPEG-2 Systems must be included in error resilience studies. It is shown that error resilience measurements that include the effect of the Systems Layer differ significantly from those based only on the Video Layer.

Two major aspects of error resilience are investigated in this thesis: resynchronisation after error detection, and error concealment. Results are provided for resynchronisation using small slices, adaptive slice sizes and macroblock resynchronisation schemes. These measurements show that the macroblock resynchronisation scheme achieves the best performance, although it is not part of the MPEG-2 standard. The performance of the adaptive slice size scheme, however, is similar to that of macroblock resynchronisation, and this approach is compatible with the MPEG-2 standard.

The most important contribution of this thesis is a new concealment technique, Decoder Motion Vector Estimation (DMVE), which improves decoded video quality significantly. The technique exploits the temporal redundancy between the current and previous frames and the correlation between lost macroblocks and their surrounding pixels: motion estimation is applied again, this time at the decoder, to search the previous picture for a match to the lost macroblocks. The process is similar to the one performed by the encoder, but it runs in the decoder.

The integration of DMVE with small slices, adaptive slice sizes or macroblock resynchronisation is also evaluated, giving an overview of the performance of the individual techniques compared with the combined ones. The results show that high performance can be achieved by integrating DMVE with an effective resynchronisation scheme, even at high cell loss rates. The results of this thesis demonstrate clearly that the MPEG-2 standard is capable of providing a high level of error resilience, even in the presence of high loss. The key to this performance is appropriate tuning of the encoder and effective concealment in the decoder.
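The Python sketch below is a simplified reconstruction of the DMVE idea as summarised above, not the thesis implementation: for a lost macroblock, the decoder matches the ring of correctly decoded pixels around the hole against the previous frame and copies the best-matching block. The border width, search range, function name and the assumption that the lost macroblock is not at a frame border are all choices made for the illustration.

```python
import numpy as np

MB = 16        # macroblock size in pixels
BORDER = 4     # width of the surrounding ring used for matching (assumption)
SEARCH = 8     # +/- search range in pixels (assumption)

def conceal_lost_mb(cur: np.ndarray, prev: np.ndarray, top: int, left: int) -> None:
    """DMVE-style concealment of the lost MB at (top, left) in `cur` using `prev`.

    Assumes the lost macroblock is not at a frame border.
    """
    h, w = cur.shape
    t0, l0 = top - BORDER, left - BORDER
    t1, l1 = top + MB + BORDER, left + MB + BORDER
    ring = np.ones((MB + 2 * BORDER, MB + 2 * BORDER), dtype=bool)
    ring[BORDER:BORDER + MB, BORDER:BORDER + MB] = False   # exclude the lost pixels
    target = cur[t0:t1, l0:l1]

    best_sad, best_dy, best_dx = np.inf, 0, 0
    for dy in range(-SEARCH, SEARCH + 1):
        for dx in range(-SEARCH, SEARCH + 1):
            if not (0 <= t0 + dy and t1 + dy <= h and 0 <= l0 + dx and l1 + dx <= w):
                continue
            cand = prev[t0 + dy:t1 + dy, l0 + dx:l1 + dx]
            sad = np.abs(cand[ring].astype(int) - target[ring].astype(int)).sum()
            if sad < best_sad:
                best_sad, best_dy, best_dx = sad, dy, dx

    # Fill the hole with the block under the best-matching ring in the previous frame.
    cur[top:top + MB, left:left + MB] = \
        prev[top + best_dy:top + best_dy + MB, left + best_dx:left + best_dx + MB]
```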
3

Development of dual-filter architectures and FPGA implementations for the H.264/AVC deblocking filter

Καβρουλάκης, Νικόλαος, 07 June 2013
The subject of this diploma thesis is the presentation and study of an alternative design of the deblocking filter of the H.264 video coding standard. The operation of the filter is first explained in detail, and a novel design using a five-stage pipeline is then proposed. The design offers significant advantages in speed, notably an improved operating frequency and throughput. This was confirmed by measurements on specific FPGAs, which verified the theoretical conclusions that had been drawn. / The H.264 standard (also known as MPEG-4 Part 10) is nowadays the most widely used standard in the area of video coding, as it is supported by the largest companies on the internet (including Google, Apple and YouTube). Its most important advantage over previous standards is that it achieves a better bitrate without a loss in quality. A crucial part of the standard is the deblocking filter, which is applied to each macroblock of a frame to reduce blocking distortion. The filter accounts for about one third of the computational requirements of the standard, which makes it a particularly important part of the decoding process.

This thesis presents an alternative design of the filter which achieves better performance than existing ones. The design is based on the use of two filters (instead of the single filter used in current designs) and on a pipelined design within each filter. The double filter exploits the independence between many parts of a macroblock: different parts can be filtered at the same time without conflict. Furthermore, the pipeline substantially increases the throughput. Naturally, for the desired result to be achieved, the design has to be made carefully so that the restrictions imposed by the standard are not violated. The alternative design yields a significant increase in performance: the operating frequency, the throughput and the quality of the produced video are all considerably improved, while the inevitable increase in area (since two filters are used instead of one) is not significant in terms of cost.

The structure of the thesis is as follows. Chapter 1 gives a brief description of the H.264 standard and clarifies the exact position of the deblocking filter in the overall design. Chapter 2 gives the algorithmic description of the filter, presenting in detail all the parameters involved and the equations used during filtering. Chapter 3 presents the chosen architecture: the block diagram is presented and explained, together with the timing table that describes how the filter operates, and the pipelining applied to the filter is analysed and justified. Chapter 4 analyses every structural unit used in the architecture and its role in the overall structure. Finally, Chapter 5 presents the results of measurements made on typical Altera and Xilinx FPGAs. The results are shown in table format, and for specific parameters diagrams are used, making evident the improved performance of the proposed design compared with the older ones in wide use.
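For readers unfamiliar with the filter that the thesis implements in hardware, the sketch below restates in plain Python the boundary-strength (bS) decision the H.264/AVC deblocking filter makes for each 4x4 block edge before any samples are modified (luma, frame coding). It is a software illustration of the standard's rules, not the FPGA architecture discussed above.

```python
def boundary_strength(p_intra, q_intra, on_mb_edge,
                      p_has_coeffs, q_has_coeffs,
                      p_ref, q_ref, p_mv, q_mv) -> int:
    """Return bS in 0..4 for the edge between neighbouring 4x4 blocks p and q."""
    if p_intra or q_intra:
        return 4 if on_mb_edge else 3     # strongest filtering at intra macroblock edges
    if p_has_coeffs or q_has_coeffs:
        return 2                          # either block has non-zero residual coefficients
    # bS = 1 when the blocks use different reference pictures or their motion
    # vectors differ by at least one integer sample (4 quarter-pel units).
    if p_ref != q_ref:
        return 1
    if abs(p_mv[0] - q_mv[0]) >= 4 or abs(p_mv[1] - q_mv[1]) >= 4:
        return 1
    return 0                              # the edge is not filtered
```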
4

Analýza kvality obrazu v digitálních televizních systémech / Picture Quality Analysis in Digital Television Systems

Bednarz, Robin, January 2009
This diploma thesis deals with the analysis of picture quality in digital television systems and contains a theoretical description of subjective and objective methods for assessing picture quality. The thesis includes short-term and long-term analyses of the picture quality of terrestrial DVB-T television. Measurements and experiments were carried out with the Rohde & Schwarz DVQ picture quality analyser and the MPEG-2 Quality Monitor and MPEG-2 Elementary Stream Analyzer software.
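As background on the objective assessment methods mentioned above, the sketch below computes PSNR, the most common full-reference objective picture quality metric. The DVQ analyser and the MPEG-2 analysis software used in the thesis apply their own measures, so this is only an illustrative baseline, not their algorithm.

```python
import numpy as np

def psnr(reference: np.ndarray, decoded: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-sized 8-bit frames."""
    mse = np.mean((reference.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")               # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)
```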
