About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Implementace metriky pro hodnocení kvality videosekvencí do dekodéru H.264/AVC / Implementing a Video Quality Metric in the H.264/AVC Decoder

Grúbel, Michal January 2010 (has links)
In this diploma thesis, an algorithm for evaluating the picture quality of H.264-coded video sequences is introduced and applied. The peak signal-to-noise ratio (PSNR) is used as the objective picture-quality measure. Whereas computing the PSNR usually requires a reference signal to compare against the distorted video sequence, this algorithm estimates the PSNR from the coded transform coefficients alone; thus, no reference signal is needed.
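For context, the conventional full-reference PSNR that the thesis takes as its starting point can be computed as in the minimal Python/NumPy sketch below. This is the standard definition, not the thesis's no-reference estimator; the 8-bit peak value of 255 is a common assumption.

```python
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
    """Conventional full-reference PSNR in dB (assumes same-shape frames)."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)
```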
2

Dynamic 3D-Torrent Assembly for Bit-Rate Adjustments in P2P Video Streaming

Lin, Ching-Chen 27 August 2010 (has links)
In this thesis, we propose a mechanism to dynamically adjust video bit rates through the segmentation and reassembly of SVC (Scalable Video Coding) segments in a P2P network. At the transmitter, an SVC film is divided into a number of segments of different sizes. Each segment is further chopped into Torrents according to the three scalability dimensions of SVC (temporal, quality, and spatial); Torrents organized along these three dimensions are referred to as 3D-Torrents. At the receiver, we present three approaches for grabbing Torrents (Temporal-First, Quality-First, and Interleaving) from P2P networks to validate that the proposed 3D-Torrent reassembly can adapt to different bandwidths and hardware equipment, so that video freeze-up time can be avoided. To demonstrate how the proposed 3D-Torrent reassembly affects video bit rates in a P2P video streaming environment, we implement the segmentation, grabbing, and reassembly of Torrents on a Linux platform. In a P2P network built with Hadoop, we study (i) the video freeze-up time with and without 3D-Torrent reassembly, and (ii) video quality under the different grabbing approaches using two types of video, with static and active backgrounds. To compare the video quality at the transmitter to that at the receiver, we modify the conventional PSNR equation: two new dimensions, temporal and spatial, are included in the resulting PSNR3D equation. From the experimental results, we observe that the freeze-up time approaches zero when using 3D-Torrent reassembly, and that video bit rates can be dynamically adjusted according to the available bandwidth.
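As an illustration of the three grabbing orders described above, the following Python sketch enumerates layer coordinates of a hypothetical 3D-Torrent grid. The grid dimensions and the exact traversal rules are assumptions, since the abstract does not specify them.

```python
from itertools import product

# Hypothetical 3D-Torrent grid over the three SVC scalability axes.
T, Q, S = 4, 3, 2  # temporal, quality, spatial layer counts (assumed)

def temporal_first():
    # Fetch along the temporal axis first, then raise quality, then spatial.
    return [(t, q, s) for s in range(S) for q in range(Q) for t in range(T)]

def quality_first():
    # Raise quality layers first at each temporal position.
    return [(t, q, s) for s in range(S) for t in range(T) for q in range(Q)]

def interleaving():
    # Alternate across axes by increasing total layer index (one plausible rule).
    return sorted(product(range(T), range(Q), range(S)), key=sum)

print(temporal_first()[:5])  # [(0,0,0), (1,0,0), (2,0,0), (3,0,0), (0,1,0)]
```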
3

Comparação da transformada wavelet discreta e da transformada do cosseno, para compressão de imagens de impressão digital / Comparison of the discrete wavelet transform and the discrete cosine transform for fingerprint image compression

Reigota, Nilvana dos Santos 27 February 2007 (has links)
This research compares the following fingerprint image compression methods: the discrete cosine transform (DCT), the Haar wavelet transform, the Daubechies wavelet transform, and wavelet scalar quantization (WSQ). The aim is to identify the method with the smallest data loss at the highest possible compression ratio. Image quality is measured using the root mean square error (ERMS), the signal-to-noise ratio (SNR), and the peak signal-to-noise ratio (PSNR). Under these metrics, the DCT gave the best results, followed by WSQ; however, WSQ achieved the best compression time and the best quality of the recovered images as evaluated by the GrFinger 4.2 software.
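As a rough illustration of the kind of transform coding compared above, the sketch below compresses an image by keeping only the largest 2-D DCT coefficients and measures the PSNR of the reconstruction. The 10% retention rate is an arbitrary assumption, and this is not the thesis's exact pipeline.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_compress(image: np.ndarray, keep: float = 0.10) -> np.ndarray:
    """Keep only the largest `keep` fraction of 2-D DCT coefficients."""
    coeffs = dctn(image.astype(np.float64), norm="ortho")
    threshold = np.quantile(np.abs(coeffs), 1.0 - keep)
    coeffs[np.abs(coeffs) < threshold] = 0.0  # discard small coefficients
    return idctn(coeffs, norm="ortho")

def psnr(ref: np.ndarray, rec: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((ref.astype(np.float64) - rec) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```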
4

Modèles d'interpolation spatiale et spectrale, anti-crénelage et mesures de qualité / Spatial and spectral interpolation models, anti-aliasing, and quality measures

Horé, Alain January 2011 (has links)
This thesis by publications describes several works in imaging, covering both image acquisition and post-processing. The first concerns an image-resizing algorithm in which a pixel is treated not as a point but as a surface unit expressed by a mathematical function; the intensity of a pixel is determined by interpolation using the tools of integral calculus. The second concerns another image-resizing algorithm in which the derivatives of the image are used to increase contrast and enhance high frequencies during resizing; to combine the image and its derivatives, we use the generalized sampling theorem of Papoulis. In this second work and in the rest of the thesis, the pixel is treated as a point. In the third work, we propose a partial differential diffusion equation to reduce the aliasing that regularly appears in many image-resizing algorithms. The proposed equation results from a refinement of the heat diffusion equation used by Perona and Malik: we introduce inverse diffusivity to considerably reduce aliasing on sharp edges. Contrast enhancement during the diffusion process is achieved by integrating a high-pass filter, namely the Laplacian, into the diffusion equation. An effective model for reducing aliasing on lines, based on the eigenvalues of the Hessian matrix, is also proposed. The fourth work is a demosaicing algorithm for reconstructing a color image from an image acquired through a color filter array (CFA). Since in a CFA only one primary color (red, green, or blue) is available at each pixel position, we propose an interpolation model to estimate the missing colors at each pixel position. Our algorithm can be used with various CFA patterns. It is inspired by the universal demosaicing algorithm of Lukac et al. and brings several improvements. The first improvement is the detection of edges and uniform regions in an image acquired from a CFA. The second is the full use of the color-difference model, which is well known in demosaicing algorithms. The third is a spectral interpolation model that estimates the color of a pixel from the colors and positions of its neighbors. In the fifth and final work, we address image quality, an important notion in imaging for validating algorithms and models. We carry out an analytical and experimental study comparing the PSNR (Peak Signal-to-Noise Ratio) and the SSIM (Structural Similarity Index Measure), two quality measures widely used in image processing. The analytical study reveals a logarithmic relationship between the two measures. Extensive experiments with different images give further insight into how effectively these two measures evaluate the quality of images subjected to degradations or processing such as JPEG compression, JPEG 2000 compression, Gaussian blur, and additive Gaussian noise.
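For reference, the classical Perona-Malik diffusion that the third work refines can be sketched as follows. This is the standard explicit scheme with the usual edge-stopping function, not the thesis's modified equation with inverse diffusivity; the parameter values and the periodic boundary handling are assumptions made for brevity.

```python
import numpy as np

def perona_malik(image: np.ndarray, n_iter: int = 20, kappa: float = 30.0,
                 dt: float = 0.2) -> np.ndarray:
    """Classical Perona-Malik anisotropic diffusion (explicit scheme)."""
    u = image.astype(np.float64).copy()
    for _ in range(n_iter):
        # Finite differences toward the four neighbors (periodic via np.roll).
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping diffusivity c(s) = 1 / (1 + (s / kappa)^2).
        c = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)
        u += dt * (c(dn) * dn + c(ds) * ds + c(de) * de + c(dw) * dw)
    return u
```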
5

Efficient compression of synthetic video

Mazhar, Ahmad Abdel Jabbar Ahmad January 2013 (has links)
Streaming of on-line gaming video is a challenging problem because of the enormous amount of video data that must be sent during game play, especially within the limits of uplink capacity. Encoding complexity is also a challenge because of the time delay while on-line gamers are communicating. The main goal of this research is to propose an enhanced on-line game video streaming system. First, the most common video coding techniques were evaluated using both objective and subjective metrics. Three widespread video coding techniques were selected for the study: H.264, MPEG-4 Visual, and VP8. Diverse types of video sequences were used, with different frame rates and resolutions, and the effects of changing frame rate and resolution on compression efficiency and viewer satisfaction are presented. The results showed that both the compression process and perceptual satisfaction are strongly affected by the nature of the compressed sequence; H.264 showed higher compression efficiency for synthetic sequences and outperformed the other codecs in the subjective evaluation tests. Second, a fast inter-prediction technique was devised to speed up the H.264 encoding process. On-line game streaming is a real-time application, so compression complexity significantly affects the whole streaming process. Although our comparative results recommend H.264 for synthetic video coding, it still suffers from high encoding complexity; a low-complexity coding algorithm is therefore presented as a fast inter-coding model with a reference management technique. The proposed algorithm was compared to a state-of-the-art method and achieved better time and bit-rate reductions with negligible loss of fidelity. Third, recommendations on the tradeoff between frame rate and resolution within given uplink capacities are provided for H.264 video coding. The recommended tradeoffs result from extensive experiments using the Double Stimulus Impairment Scale (DSIS) subjective evaluation method. The experiments showed that viewer satisfaction is profoundly affected by varying frame rates and resolutions; moreover, increasing the frame rate or resolution does not always improve perceptual quality. Tradeoffs are therefore recommended that balance frame rate and resolution within a given bit rate to maximize user satisfaction. For system completeness, and to facilitate implementation of the proposed techniques, an efficient game video streaming management system is proposed. Compared to existing on-line live video service systems for games, the proposed system provides improved coding efficiency, reduced complexity, and better user satisfaction.
6

Performance Comparison of Image Enhancement Algorithms Evaluated on Poor Quality Images

Kotha, Aravind Eswar Ravi Raja, Majety, Lakshmi Ratna Hima Rajitha January 2017 (has links)
Many applications require automatic image analysis for input images of differing quality. In many cases the quality of the acquired images suits the purpose of the application; in other cases it has to be improved to meet the needs of a specific application. Higher image quality can be achieved with Image Enhancement (IE) algorithms, but the choice of IE technique is challenging because it varies with the application. The goal of this research is to investigate the possibility of applying IE algorithms selectively, using the entropy and Peak Signal-to-Noise Ratio (PSNR) of the acquired image as the selection parameters. Three algorithms, Retinex, the bilateral filter, and bilateral tone adjustment, were chosen as the IE techniques to evaluate in this work, with entropy and PSNR used for their performance evaluation. Images from three fingerprint image databases were used as input. The decision to enhance an image in these databases with one of the considered algorithms is based on empirically determined entropy and PSNR thresholds. The Automatic Fingerprint Identification System (AFIS) was selected as the application of interest. The evaluation results show that the performance of the investigated IE algorithms significantly affects the performance of AFIS; a second conclusion is that entropy and PSNR may serve as indicators that an input image requires IE before AFIS processing.
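A minimal sketch of the Shannon entropy used as one of the selection indicators above might look like this in Python/NumPy. The 8-bit histogram and base-2 logarithm are common conventions, assumed here rather than taken from the thesis.

```python
import numpy as np

def image_entropy(image: np.ndarray) -> float:
    """Shannon entropy (bits) of an 8-bit grayscale image's histogram."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins to avoid log(0)
    return float(-np.sum(p * np.log2(p)))
```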
7

2D SPECTRAL SUBTRACTION FOR NOISE SUPPRESSION IN FINGERPRINT IMAGES

Dandu, Sai Venkata Satya Siva Kumar, Kadimisetti, Sujit January 2017 (has links)
Human fingerprints are rich in details called minutiae, which can be used as identification marks for fingerprint verification. To extract these details, fingerprint capturing techniques must be improved, since noise from the environment is added when the fingerprint is captured. The goal of this thesis is to remove the noise present in the fingerprint image. To achieve a good-quality fingerprint image, this noise has to be removed or suppressed; here this is done with a technique called 'spectral subtraction', in which an estimated noise spectrum is subtracted from the noisy signal spectrum. The performance of the algorithm is assessed by comparing the original fingerprint image and the image obtained after spectral subtraction, using parameters such as PSNR and SSIM, for different fingerprints in the database. Finally, matching performance was measured using NIST matching software, and the experimental results are presented as Receiver Operating Characteristic (ROC) graphs produced in MATLAB.
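The core operation described above, subtracting an estimated noise spectrum from the noisy image spectrum, can be sketched as follows. This assumes the textbook magnitude-subtraction variant with the noisy phase retained, which may differ from the thesis's exact algorithm.

```python
import numpy as np

def spectral_subtraction_2d(noisy: np.ndarray, noise_mag: np.ndarray) -> np.ndarray:
    """Subtract an estimated noise magnitude spectrum from a noisy image.

    `noise_mag` is the estimated |FFT| of the noise, same shape as `noisy`.
    """
    spectrum = np.fft.fft2(noisy.astype(np.float64))
    magnitude = np.abs(spectrum)
    phase = np.angle(spectrum)
    # Half-wave rectification: clip negative magnitudes to zero.
    cleaned_mag = np.maximum(magnitude - noise_mag, 0.0)
    cleaned = np.fft.ifft2(cleaned_mag * np.exp(1j * phase))
    return np.real(cleaned)
```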
8

Modeling and Evaluating Feedback-Based Error Control for Video Transfer

Wang, Yubing 24 October 2008 (has links)
"Packet loss can be detrimental to real-time interactive video over lossy networks because one lost video packet can propagate errors to many subsequent video frames due to the encoding dependency between frames. Feedback-based error control techniques use feedback information from the decoder to adjust coding parameters at the encoder or retransmit lost packets to reduce the error propagation due to data loss. Feedback-based error control techniques have been shown to be more effective than trying to conceal the error at the encoder or decoder alone since they allow the encoder and decoder to cooperate in the error control process. However, there has been no systematic exploration of the impact of video content and network conditions on the performance of feedback-based error control techniques. In particular, the impact of packet loss, round-trip delay, network capacity constraint, video motion and reference distance on the quality of videos using feedback-based error control techniques have not been systematically studied. This thesis presents analytical models for the major feedback-based error control techniques: Retransmission, Reference Picture Selection (both NACK and ACK modes) and Intra Update. These feedback-based error control techniques have been included in H.263/H.264 and MPEG4, the state of the art video in compression standards. Given a round-trip time, packet loss rate, network capacity constraint, our models can predict the quality for a streaming video with retransmission, Intra Update and RPS over a lossy network. In order to exploit our analytical models, a series of studies has been conducted to explore the effect of reference distance, capacity constraint and Intra coding on video quality. The accuracy of our analytical models in predicting the video quality under different network conditions is validated through simulations. These models are used to examine the behavior of feedback-based error control schemes under a variety of network conditions and video content through a series of analytic experiments. Analysis shows that the performance of feedback-based error control techniques is affected by a variety of factors including round-trip time, loss rate, video content and the Group of Pictures (GOP) length. In particular: 1) RPS NACK achieves the best performance when loss rate is low while RPS ACK outperforms other repair techniques when loss rate is high. However RPS ACK performs the worst when loss rate is low. Retransmission performs the worst when the loss rate is high; 2) for a given round-trip time, the loss rate where RPS NACK performs worse than RPS ACK is higher for low motion videos than it is for high motion videos; 3) Videos with RPS NACK always perform the same or better than videos without repair. However, when small GOP sizes are used, videos without repair perform better than videos with RPS ACK; 4) RPS NACK outperform Intra Update for low-motion videos. However, the performance gap between RPS NACK and Intra Update drops when the round-trip time or the intensity of video motion increases. 5) Although the above trends hold for both VQM and PSNR, when VQM is the video quality metric the performance results are much more sensitive to network loss. 6) Retransmission is effective only when the round-trip time is low. When the round-trip time is high, Partial Retransmission achieves almost the same performance as Full Retransmission. 
These insights derived from our models can help determine appropriate choices for feedback-based error control techniques under various network conditions and video content. "
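To make the error-propagation intuition concrete, here is a small Monte-Carlo sketch of how one lost frame corrupts the rest of a GOP when no feedback repair is applied. The GOP length, loss rate, and the all-or-nothing damage model are simplifying assumptions, not the thesis's analytical model.

```python
import random

def damaged_fraction(gop_len: int = 15, loss_rate: float = 0.02,
                     n_frames: int = 100_000, seed: int = 1) -> float:
    """Fraction of frames rendered with errors when a loss corrupts
    every following frame until the next intra (I) frame."""
    rng = random.Random(seed)
    damaged = 0
    corrupted = False
    for i in range(n_frames):
        if i % gop_len == 0:
            corrupted = False  # intra frame resynchronizes the decoder
        if rng.random() < loss_rate:
            corrupted = True   # loss propagates via inter prediction
        if corrupted:
            damaged += 1
    return damaged / n_frames

print(f"~{damaged_fraction():.1%} of frames damaged")
```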
9

Comparative analysis of DIRAC PRO-VC-2, H.264 AVC and AVS CHINA-P7

Kalra, Vishesh 07 July 2011
A video codec compresses the input video source to reduce storage and transmission bandwidth requirements while maintaining quality. It is an essential technology for applications such as digital television, DVD-Video, mobile TV, videoconferencing, and internet video streaming. Different video codecs are used in the industry today, and understanding their operation in targeted video applications is the key to optimization. The latest advanced video codec standards have become very important to the multimedia industry, providing cost-effective encoding and decoding of video with high compression efficiency. Currently, H.264 AVC, AVS, and Dirac are used in the industry to compress video. The H.264 codec standard was developed by the ITU-T Video Coding Experts Group (VCEG) together with the ISO/IEC Moving Picture Experts Group (MPEG). The Audio Video coding Standard (AVS) is produced by a working group on audio and video coding standards in China. VC-2, also known as Dirac Pro and developed by the BBC, is a royalty-free technology that anyone can use; it has been standardized through SMPTE as VC-2. H.264 AVC, Dirac Pro, Dirac, and AVS-P2 are dedicated to high-definition video, while AVS-P7 targets mobile video. Among these standards, this work performs a comparative analysis of H.264 AVC, Dirac Pro/SMPTE VC-2, and AVS-P7 in the low-bitrate and high-bitrate regions. Bitrate control and constant QP are the methods employed for the analysis. Evaluation parameters such as compression ratio, PSNR, and SSIM are used for quality comparison. Depending on the target application and available bitrate, the codecs are ranked to indicate the preferred choice.
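As one hedged example of computing the evaluation parameters named above, the following sketch compares a reference frame with a decoded frame using scikit-image. The library choice and the grayscale frame-pair setup are assumptions, since the study does not state its tooling.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(reference: np.ndarray, decoded: np.ndarray,
             original_bytes: int, compressed_bytes: int) -> dict:
    """Compression ratio, PSNR and SSIM for one 8-bit grayscale frame pair."""
    return {
        "compression_ratio": original_bytes / compressed_bytes,
        "psnr_db": peak_signal_noise_ratio(reference, decoded, data_range=255),
        "ssim": structural_similarity(reference, decoded, data_range=255),
    }
```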
