11

Self-Similarity of Images and Non-local Image Processing

Glew, Devin January 2011 (has links)
This thesis has two related goals: the first involves the concept of self-similarity of images. Image self-similarity is important because it forms the basis for many imaging techniques, such as non-local means denoising and fractal image coding. Research so far has focused largely on self-similarity in the pixel domain, that is, on how well different regions in an image mimic each other, and most work on self-similarity has used only the mean squared error (MSE). In this thesis, self-similarity is examined in terms of both the pixel and wavelet representations of images. In each of these domains, two ways of measuring similarity are considered: the MSE and a relatively new measure of image fidelity called the Structural Similarity (SSIM) Index. We show that the MSE and the SSIM Index give very different answers to the question of how self-similar images really are. The second goal of this thesis involves non-local image processing. First, a generalization of the well-known non-local means denoising algorithm is proposed and examined; the groundwork for this generalization is laid by the aforementioned results on image self-similarity with respect to the MSE. The new method is then extended to the wavelet representation of images. Experimental results are given to illustrate the applications of these new ideas.
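The pixel-domain baseline that this generalization starts from, classical non-local means, can be sketched compactly: each output pixel is a weighted average of pixels whose surrounding patches are close in MSE. The Python below is a didactic sketch under that textbook formulation (all function and parameter names are ours, not the thesis's):

```python
import numpy as np

def nlmeans_pixel(img, i, j, patch=3, search=7, h=10.0):
    """Estimate one denoised pixel by classical non-local means.

    Every patch in a search window is weighted by its MSE distance to
    the patch centred at (i, j); 'h' controls how fast weights decay.
    A didactic sketch, not an optimised implementation.
    """
    r, s = patch // 2, search // 2
    pad = np.pad(img.astype(float), r + s, mode='reflect')
    ci, cj = i + r + s, j + r + s                    # centre in padded image
    ref = pad[ci - r:ci + r + 1, cj - r:cj + r + 1]  # reference patch
    num = den = 0.0
    for di in range(-s, s + 1):
        for dj in range(-s, s + 1):
            ni, nj = ci + di, cj + dj
            cand = pad[ni - r:ni + r + 1, nj - r:nj + r + 1]
            w = np.exp(-np.mean((ref - cand) ** 2) / h ** 2)  # MSE-based weight
            num += w * pad[ni, nj]
            den += w
    return num / den
```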
12

Comparative analysis of DIRAC PRO-VC-2, H.264 AVC and AVS CHINA-P7

Kalra, Vishesh 07 July 2011 (has links)
A video codec compresses the input video source to reduce storage and transmission bandwidth requirements while maintaining quality. It is an essential technology for applications such as digital television, DVD-Video, mobile TV, videoconferencing and internet video streaming. Different video codecs are used in the industry today, and understanding their operation is the key to optimizing them for particular video applications. The latest advanced video codec standards, which provide cost-effective encoding and decoding and achieve high compression efficiency, have become very important in the multimedia industry. Currently, H.264 AVC, AVS and DIRAC are used in the industry to compress video. The H.264 codec standard was developed by the ITU-T Video Coding Experts Group (VCEG) together with the ISO/IEC Moving Picture Experts Group (MPEG). The Audio Video coding Standard (AVS) is developed by a working group on audio and video coding standards in China. VC-2, also known as Dirac Pro, was developed by the BBC; it is a royalty-free technology that anyone can use and has been standardized through SMPTE as VC-2. H.264 AVC, Dirac Pro, Dirac and AVS-P2 are dedicated to high-definition video, while AVS-P7 targets mobile video. This work performs a comparative analysis of the H.264 AVC, DIRAC PRO/SMPTE-VC-2 and AVS-P7 standards in the low-bitrate and high-bitrate regions. Bitrate control and constant QP are the methods employed for the analysis. Evaluation parameters such as compression ratio, PSNR and SSIM are used for quality comparison. Depending on the target application and available bitrate, an order of performance is given to indicate the preferred codec.
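Two of the evaluation parameters named here are straightforward to compute; the sketch below shows textbook PSNR and compression-ratio calculations in Python (illustrative helpers of our own, not code from the study):

```python
import numpy as np

def psnr(reference, decoded, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference frame and
    its decoded reconstruction; higher means less distortion."""
    mse = np.mean((reference.astype(float) - decoded.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def compression_ratio(raw_bytes, coded_bytes):
    """Size of the uncompressed stream divided by the coded size."""
    return raw_bytes / coded_bytes
```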
14

Kvalita obrazu a služeb v širokopásmových multimediálních sítích a systémech budoucnosti / Video and Data Services Quality in the Future Broadband Multimedia Systems and Networks

Kufa, Jan January 2018 (has links)
This doctoral thesis focuses on the analysis of signal processing in future broadband multimedia networks and systems, where ultra-high-definition (UHDTV), high-frame-rate (HFR) and stereoscopic (3D) systems are expected. These systems will enable highly efficient source compression of video, audio and data as well as their highly efficient transmission, both in free-to-air broadcasting (e.g., DVB-T2) and in pay-TV services (e.g., IPTV). The aim of the thesis is to analyze and evaluate picture and service quality in these systems on the basis of objective metrics and subjective tests. The thesis further focuses on the analysis of perceived quality in stereoscopic television, the coding efficiency of modern stereoscopic encoders, and the influence of the sequences on viewing comfort.
15

Program pro hodnocení kvality obrazu s využitím neuronové sítě / Program for evaluating image quality using neural network

Šimíček, Pavel January 2008 (has links)
This thesis studies the assessment of picture quality using the artificial neural network approach. In the first part, the two main ways to evaluate picture quality are described: subjective assessment, in which a group of people watches the picture and rates its quality, and objective assessment, which is based on mathematical relations. The calculation of the structural similarity index (SSIM) is analyzed in detail. In the second part, the basics of neural networks are described. A neural network designed to simulate subjective assessment scores based on the SSIM index was created in Matlab.
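For reference, the SSIM statistic that the network is trained to map to subjective scores combines luminance, contrast and structure terms. A minimal single-window Python sketch of the standard formula follows (the standard metric averages this statistic over local windows; computing it once globally is a simplification of ours):

```python
import numpy as np

def ssim_global(x, y, max_val=255.0):
    """Single-window SSIM between two equal-size images.

    Stabilizing constants follow the usual convention
    C1 = (0.01*L)^2, C2 = (0.03*L)^2 for dynamic range L.
    """
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    x, y = x.astype(float), y.astype(float)
    mx, my = x.mean(), y.mean()            # mean luminance
    vx, vy = x.var(), y.var()              # variance (contrast)
    cov = ((x - mx) * (y - my)).mean()     # covariance (structure)
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```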
16

Pokročilé metody pro zabezpečení multimediálních dat / Advanced Methods to Multimedia Data Protection

Mikulčík, Ondřej January 2014 (has links)
To protect the copyright of multimedia works, watermarking techniques have been developed that insert an invisible watermark into the original data. The aim of this thesis was to explore modern watermarking techniques, choose three of them, implement and test them, evaluate their properties and, where possible, improve them. All methods insert a watermark into the luminance component of the original image and work with a binary or black-and-white watermark. All techniques work in the frequency domain using the discrete wavelet transform. For the implementation of the methods, software named "Watermarking" was developed in Java version 7. The first chapter describes the types of watermarks, the general process of insertion and extraction, watermarking systems, and the important requirements on embedded watermarks. In addition, qualitative methods for their comparison and testing are mentioned. The chapter also contains a theoretical description of the transformations and functions used. The second chapter describes the user interface of the "Watermarking" software. Chapters three and four contain a theoretical description of the implemented methods and of the implementation of the watermark insertion and extraction processes; they also discuss the exact testing procedures and the sample data, with the results clearly displayed in tables. The fifth chapter discusses in detail the results obtained when testing the robustness of the watermark using the StirMark software. The conclusion evaluates the advantages and disadvantages of the methods and their quality parameters.
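As an illustration of the common pattern behind such methods, the sketch below embeds a binary watermark additively into one detail subband of a one-level DWT of the luminance channel, using PyWavelets; it is a generic example of ours, not one of the three methods implemented in the thesis:

```python
import numpy as np
import pywt

def embed_watermark(luma, wm_bits, alpha=8.0):
    """Additively embed a binary watermark into the horizontal-detail
    subband of a one-level Haar DWT of the luminance channel."""
    cA, (cH, cV, cD) = pywt.dwt2(luma.astype(float), 'haar')
    bits = 2.0 * wm_bits.astype(float) - 1.0   # map {0,1} -> {-1,+1}
    h, w = bits.shape                          # watermark must fit in cH
    cH_marked = cH.copy()
    cH_marked[:h, :w] += alpha * bits          # alpha trades robustness for visibility
    return pywt.idwt2((cA, (cH_marked, cV, cD)), 'haar')
```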
17

A Comparison of the Discrete Hermite Transform and Wavelets for Image Compression

Bellis, Christopher John 14 May 2012 (has links)
No description available.
18

Combined robust and fragile watermarking algorithms for still images : design and evaluation of combined blind discrete wavelet transform-based robust watermarking algorithms for copyright protection using mobile phone numbers and fragile watermarking algorithms for content authentication of digital still images using hash functions

Jassim, Taha Dawood January 2014 (has links)
This thesis deals with copyright protection and content authentication for still images. New blind, transform-domain, block-based algorithms using one-level and two-level Discrete Wavelet Transform (DWT) were developed for copyright protection. A mobile number with its international code is used as the watermarking data. The robust algorithms embed the watermarking information in the low-low frequency coefficients of the DWT. The watermarking information is embedded in the green channel of RGB colour images and the Y channel of YCbCr images, and is scrambled using a secret key to increase the security of the algorithms. Because the watermarking information is small compared to the host image, the embedding process is repeated several times, which increases the robustness of the algorithms. A shuffling process is applied during the multiple embedding in order to avoid spatial correlation between the host image and the watermarking information. The effects of using one and two levels of DWT on robustness and image quality have been studied. The Peak Signal-to-Noise Ratio (PSNR), the Structural Similarity Index Measure (SSIM) and the Normalized Correlation Coefficient (NCC) are used to evaluate the fidelity of the images. Several greyscale and colour still images are used to test the new robust algorithms. Compared with DCT-based algorithms, the new algorithms offer better robustness against different attacks such as JPEG compression, scaling, salt-and-pepper noise, Gaussian noise, filtering and other image processing. The authenticity of the images was assessed with a fragile watermarking algorithm that embeds a hash (MD5) of the image as watermarking information in the spatial domain. The new algorithm showed high sensitivity to any tampering with the watermarked images. The combined fragile and robust watermarking causes minimal distortion to the images, and the combined scheme achieves both copyright protection and content authentication.
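The fragile half of such a scheme reduces to hashing the protected content and comparing digests on receipt. A minimal Python sketch of that authentication check follows (our own illustration; the thesis embeds the digest in the spatial domain, and in practice the hash would be computed over the bits left untouched by the embedding):

```python
import hashlib
import numpy as np

def content_digest(pixels):
    """MD5 digest of the protected pixel data; flipping any protected
    bit changes the digest and exposes tampering."""
    return hashlib.md5(np.ascontiguousarray(pixels).tobytes()).hexdigest()

def is_authentic(received_pixels, extracted_digest):
    """Compare a freshly computed digest with the one recovered from
    the embedded fragile watermark (extraction itself omitted here)."""
    return content_digest(received_pixels) == extracted_digest
```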
19

Low Complex Blind Video Quality Predictor based on Support Vector Machines

Pashike, Amitesh Kumar Singam and Venkat Raj Reddy January 2012 (has links)
Objective video quality assessment plays a vital role in visual processing systems, especially in the mobile communication field, where video applications have boosted interest in robust methods for quality assessment. Among the existing approaches to video quality analysis, No-Reference (NR) assessment is the one needed in situations where the reference video is not available. The challenge lies in formulating and melding effective features into one model based on human visual characteristics. Our research explores the trade-off between quality prediction and system complexity. We therefore implemented a support vector regression algorithm as an NR-based Video Quality Metric (VQM) for quality estimation with simplified input features, obtained by extracting H.264 bitstream data at the decoder side of the network. Our metric predicted quality with a Pearson correlation coefficient of 0.99 against SSIM, 0.98 against PEVQ, 0.96 against subjective scores and 0.94 against PSNR. In terms of prediction accuracy, the proposed model thus correlates well with all of the deployed metrics, and the results demonstrate the robustness of our approach. The good correlation with subjective scores, which are considered the true or standard values of video quality, suggests that the proposed metric can be employed for real-time use.
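The core of such a predictor, support vector regression mapping decoder-side features to a quality score, is a few lines with scikit-learn. The sketch below uses synthetic stand-in features and targets (real inputs would be parsed from the H.264 bitstream; all data here is fabricated for illustration):

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.random((100, 3))            # stand-in decoder-side features
y = X @ np.array([0.5, 0.3, 0.2]) + 0.02 * rng.standard_normal(100)
X_train, y_train, X_test, y_test = X[:80], y[:80], X[80:], y[80:]

model = SVR(kernel='rbf', C=1.0, epsilon=0.01)    # support vector regression
model.fit(X_train, y_train)
rho, _ = pearsonr(y_test, model.predict(X_test))  # prediction-accuracy criterion
print(f"Pearson correlation on held-out videos: {rho:.3f}")
```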
20

Assessing Image Quality Impact of View Bypass in Cloud Rendering

Stephen A. Stamm (5930873) 15 May 2019
The accessibility and flexibility of mobile devices make them an advantageous platform for gaming, but hardware limitations impede the rendering of high-quality graphics. Rendering complex graphics on a mobile device typically results in a delayed image, known as latency, which is a great discomfort for users of any real-time rendering experience. This study tests the image stream optimization View Bypass within a cloud gaming architecture, overcoming this limitation by processing the high-quality game render on a remote computational server. A two-sample test for means is performed to determine significance between two treatments: the control group rendering without the View Bypass algorithm and the experimental group rendering with it. An SSIM index score is calculated to compare the disparity between the remote server image output and the final mobile device image output after optimizations have been performed. This score indicates the overall difference in image structural integrity between the two treatments and determines the quality and effectiveness of the tested algorithm.
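The statistical comparison described, a two-sample test for means over SSIM scores, might look as follows in Python; the scores are invented placeholders, and Welch's t-test is one reasonable choice of two-sample test, not necessarily the one used in the study:

```python
import numpy as np
from scipy.stats import ttest_ind

# Invented per-frame SSIM scores for the two treatments.
ssim_control = np.array([0.91, 0.89, 0.92, 0.90, 0.88, 0.93])  # no View Bypass
ssim_bypass  = np.array([0.95, 0.94, 0.96, 0.93, 0.95, 0.97])  # with View Bypass

t_stat, p_value = ttest_ind(ssim_control, ssim_bypass, equal_var=False)
print(f"Welch's t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant
```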
