21

Metody pro doplňování pixelů vně obrazu / Image extrapolation methods

Ješko, Petr January 2013 (has links)
The thesis deals with filling in pixels outside the image. It lists some computer-based inpainting methods and highlights the pitfalls that arise, and it examines methods for the interpolation and approximation of functions in order to find the best method for extrapolating an image beyond its borders. The basics of the wavelet transform and multiresolution analysis are described. Several methods for filling in pixels outside the image are proposed, explained, and compared, with PSNR and SSIM used to evaluate the achieved results. The OMP algorithm, which belongs to the sparse representation of signals and is used in one of the methods, is briefly discussed, as is the MATLAB development environment as a tool for implementing the algorithms that solve the given problem in practice. The practical part describes the implemented methods for adding pixels outside the image.
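The abstract above evaluates extrapolation results with PSNR and SSIM. As a minimal illustration of that comparison step only (the thesis's MATLAB implementation is not reproduced), the sketch below computes both metrics with scikit-image; the placeholder images and the 8-bit grayscale assumption are ours.

```python
# Minimal sketch: PSNR/SSIM comparison of an extrapolated image against a
# reference, assuming both are 8-bit grayscale numpy arrays of the same size.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def compare_images(reference: np.ndarray, extrapolated: np.ndarray):
    """Return (PSNR in dB, SSIM) computed over the full frame."""
    psnr = peak_signal_noise_ratio(reference, extrapolated, data_range=255)
    ssim = structural_similarity(reference, extrapolated, data_range=255)
    return psnr, ssim

if __name__ == "__main__":
    ref = np.random.randint(0, 256, (128, 128), dtype=np.uint8)  # placeholder reference
    out = ref.copy()
    out[:, -16:] = 128                                           # fake extrapolated border strip
    print(compare_images(ref, out))
```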
22

Využití pokročilých objektivních kritérií hodnocení při kompresi obrazu / Advanced objective measurement criteria applied to image compression

Šimek, Josef January 2010 (has links)
This diploma thesis deals with the use of objective quality assessment methods in image data compression. Lossy compression always introduces some distortion into the processed data, degrading image quality. The intensity of this distortion can be measured using subjective or objective methods, and objective criteria are needed to optimize compression algorithms. In this work, the SSIM index is presented as a useful tool for describing the quality of compressed images. The lossy compression scheme is realized using the wavelet transform and the SPIHT algorithm. A modification of this algorithm was implemented that partitions the wavelet coefficients into separate tree-preserving blocks which are then coded independently, an arrangement especially suitable for parallel processing. For a given compression ratio, the traditional problem arises of how to allocate the available bits among the spatial blocks to achieve the highest possible image quality. Possible approaches to this problem are discussed, and several bit-allocation methods based on the MSSIM index are proposed. The effectiveness of these methods was tested in the MATLAB environment.
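As a rough illustration of the block-wise bit-allocation idea described above, the sketch below greedily hands out bit-budget increments to whichever spatial block currently has the lowest SSIM. The greedy rule is our simplification, not the thesis's exact allocation strategy, and `encode_block`/`decode_block` are hypothetical stand-ins for the per-block wavelet/SPIHT codec.

```python
# Hedged sketch: greedy SSIM-driven bit allocation across spatial blocks.
# encode_block/decode_block are hypothetical per-block codec callbacks.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def allocate_bits(blocks, total_bits, step, encode_block, decode_block):
    """Give `step` bits at a time to the block whose reconstruction is worst."""
    budgets = [0] * len(blocks)
    recons = [np.zeros_like(b, dtype=float) for b in blocks]   # start from empty reconstructions
    spent = 0
    while spent + step <= total_bits:
        scores = [ssim(b.astype(float), r, data_range=255.0)
                  for b, r in zip(blocks, recons)]
        worst = int(np.argmin(scores))                          # block that needs bits the most
        budgets[worst] += step
        recons[worst] = decode_block(encode_block(blocks[worst], budgets[worst]))
        spent += step
    return budgets
```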
23

Metody pro doplňování pixelů vně obrazu / Image extrapolation methods

Ješko, Petr January 2013 (has links)
The thesis deals with filling in pixels outside the image. It lists some computer-based inpainting methods and highlights the pitfalls that arise, and it examines methods for the interpolation and approximation of functions in order to find the best method for extrapolating an image beyond its borders. The basics of the wavelet transform and multiresolution analysis are described, and spatial filtering, edge detection, and the OMP algorithm, which belongs to the sparse representation of signals, are briefly discussed. This theoretical background is used in the design of several methods for adding pixels outside the image, and PSNR and SSIM are used to compare the achieved results. The MATLAB development environment is also discussed as a tool for implementing the algorithms that solve the given problem in practice.
24

Měření kvality pro HEVC / Video Quality Measurement for HEVC

Klejmová, Eva January 2014 (has links)
This diploma thesis deals with standard objective and subjective video quality assessment and with an analysis of its applicability to HEVC. A basic description of the H.265/HEVC video compression standard is also presented. The main focus of the thesis is the creation of a database of compressed video sequences. Important parameters and features of the reference encoder HM-12 are discussed. Selected objective video quality assessment methods are applied to the created database. The thesis also proposes a method for video quality assessment, applies it, and collects the associated data. The final data are statistically analyzed and their correlation with the objective tests is discussed.
25

Analysis and Evaluation of Visuospatial Complexity Models

Hammami, Bashar, Afram, Mjed January 2022 (has links)
Visuospatial complexity refers to the level of detail or intricacy present within a scene, taking into account both spatial and visual properties of the dynamic scene or the place (e.g. moving images, everyday driving, video games and other immersive media). There have been several studies on measuring visual complexity from various viewpoints, e.g. marketing, psychology, computer vision and cognitive science. This research project aims at analysing and evaluating different models and tools that have been developed to measure low-level features of visuospatial complexity, such as the Structural Similarity Index measurement, the Feature Congestion measurement of clutter and the Subband Entropy measurement of clutter. We use two datasets, one focusing on (reflectional) symmetry in static images, and another that consists of real-world driving videos. The results of the evaluation show different correlations between the implemented models, such that the nature of the scene plays a significant role.
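For context, a simplified version of the Subband Entropy clutter measure mentioned above can be sketched with a wavelet decomposition standing in for the steerable pyramid of the original formulation; the wavelet choice, level count and histogram binning here are our assumptions, not those of the evaluated tools.

```python
# Simplified sketch of a subband-entropy style clutter measure using pywt.
import numpy as np
import pywt

def subband_entropy(gray, wavelet="haar", levels=3, bins=64):
    """Average Shannon entropy (bits) of the detail-subband coefficient histograms."""
    coeffs = pywt.wavedec2(np.asarray(gray, dtype=float), wavelet, level=levels)
    entropies = []
    for detail_level in coeffs[1:]:          # skip the approximation band
        for band in detail_level:            # horizontal, vertical, diagonal details
            hist, _ = np.histogram(band, bins=bins)
            p = hist / hist.sum()
            p = p[p > 0]                     # drop empty bins before taking the log
            entropies.append(float(-(p * np.log2(p)).sum()))
    return float(np.mean(entropies))
```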
26

Combined robust and fragile watermarking algorithms for still images. Design and evaluation of combined blind discrete wavelet transform-based robust watermarking algorithms for copyright protection using mobile phone numbers and fragile watermarking algorithms for content authentication of digital still images using hash functions.

Jassim, Taha D. January 2014 (has links)
This thesis deals with copyright protection and content authentication for still images. New blind, transform-domain, block-based algorithms using one-level and two-level Discrete Wavelet Transform (DWT) were developed for copyright protection. A mobile number with its international code is used as the watermarking data. The robust algorithms use the low-low (LL) frequency coefficients of the DWT to embed the watermarking information, which is embedded in the green channel of RGB colour images and the Y channel of YCbCr images. The watermarking information is scrambled using a secret key to increase the security of the algorithms. Because the watermarking information is small compared to the host image, the embedding process is repeated several times, which increases the robustness of the algorithms. A shuffling process is applied during the multiple embedding in order to avoid spatial correlation between the host image and the watermarking information. The effects of using one-level and two-level DWT on robustness and image quality have been studied. The Peak Signal to Noise Ratio (PSNR), the Structural Similarity Index Measure (SSIM) and the Normalized Correlation Coefficient (NCC) are used to evaluate the fidelity of the images. Several greyscale and colour still images are used to test the new robust algorithms. The new algorithms offered better robustness against different attacks, such as JPEG compression, scaling, salt-and-pepper noise, Gaussian noise, filtering and other image processing operations, than DCT-based algorithms. The authenticity of the images was assessed with a fragile watermarking algorithm that embeds a hash function (MD5) as watermarking information in the spatial domain. The new algorithm showed high sensitivity to any tampering with the watermarked images. The combined fragile and robust watermarking caused minimal distortion to the images, and the combined scheme achieved both copyright protection and content authentication.
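A heavily simplified sketch of the LL-band embedding step is given below, assuming a one-level Haar DWT of the green channel, key-seeded scrambling of the watermark bits, and a simple additive rule. The embedding strength, additive rule and coefficient selection are illustrative assumptions rather than the thesis's exact scheme; the repeated embedding and the fragile MD5 layer are omitted.

```python
# Hedged sketch: additive watermark embedding in the LL band of a one-level
# Haar DWT of the green channel. Strength, scrambling and coefficient choice
# are assumptions for illustration only.
import numpy as np
import pywt

def embed_watermark(rgb, bits, key=1234, strength=8.0):
    rng = np.random.default_rng(key)
    scrambled = np.asarray(bits)[rng.permutation(len(bits))]    # scramble bit order with the key

    g = rgb[:, :, 1].astype(float)                              # green channel of the RGB image
    LL, (LH, HL, HH) = pywt.dwt2(g, "haar")
    flat = LL.ravel()
    idx = rng.choice(flat.size, size=len(scrambled), replace=False)
    flat[idx] += strength * (2 * scrambled - 1)                 # +strength for bit 1, -strength for bit 0

    g_marked = pywt.idwt2((flat.reshape(LL.shape), (LH, HL, HH)), "haar")
    out = rgb.astype(float)
    out[:, :, 1] = np.clip(g_marked[:g.shape[0], :g.shape[1]], 0, 255)
    return out.astype(np.uint8)
```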
27

Instrumentation Development for Site-Specific Prediction of Spectral Effects on Concentrated Photovoltaic System Performance

Tatsiankou, Viktar January 2014 (has links)
A novel device for measuring spectral direct normal irradiance is described. The solar spectral irradiance meter (SSIM) was designed at the University of Ottawa as a cost-effective alternative to a prohibitively expensive field spectroradiometer (FSR). The latter measures the highly varying and location-dependent solar spectrum, which is essential for accurately characterizing a concentrating photovoltaic system's performance. The SSIM measures solar spectral irradiance in several narrow wavelength bands using photodiodes with integrated interference filters. The device performs spectral measurements at a fraction of the cost of an FSR, but additional post-processing is required to deduce the solar spectrum. A model was developed to take the SSIM's measurements and reconstruct the solar spectrum in the 280–4000 nm range. It resolves major atmospheric processes, such as air mass changes, Rayleigh scattering, aerosol extinction, and ozone and water vapour absorption. The SSIM was installed at the University of Ottawa's CPV testing facility in September 2013 and gathered six months of data from October 2013 to March 2014. The mean difference between the SSIM and the Eppley pyrheliometer was within ±1.5% for cloudless periods in October 2013. However, interference filter degradation and condensation negatively affected the performance of the SSIM. Future design changes will improve the long-term reliability of the next generation of SSIMs.
28

Komprese obrazu pomocí vlnkové transformace / Image Compression Using the Wavelet Transform

Bradáč, Václav January 2017 (has links)
This work deals with image compression using the wavelet transform. The beginning presents theoretical information about the best-known techniques used for image compression, a thorough description of the wavelet transform, and the EBCOT algorithm. A significant part of the work is devoted to the author's own implementation of the library. Another chapter of the diploma thesis compares and evaluates the results achieved with the implemented library against the JPEG2000 format.
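As a minimal illustration of the transform-and-quantize idea (without the EBCOT entropy-coding stage the thesis implements), the following sketch keeps only the largest wavelet coefficients; the wavelet, level count and keep fraction are arbitrary choices for the example, not the thesis's settings.

```python
# Minimal sketch: lossy wavelet compression by keeping only the largest
# coefficients. Entropy coding (EBCOT) is deliberately omitted.
import numpy as np
import pywt

def compress_decompress(gray, wavelet="bior4.4", levels=4, keep=0.05):
    """Zero out all but the largest `keep` fraction of coefficients, then reconstruct."""
    coeffs = pywt.wavedec2(np.asarray(gray, dtype=float), wavelet, level=levels)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep)          # magnitude threshold
    arr = np.where(np.abs(arr) >= thresh, arr, 0.0)        # discard small coefficients
    coeffs_q = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs_q, wavelet)
```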
29

Porovnání objektivních a subjektivních metrik kvality videa pro Ultra HDTV videosekvence / Comparison of objective and subjective video quality metrics for Ultra HDTV sequences

Bršel, Boris January 2016 (has links)
This master's thesis deals with assessing the quality of Ultra HDTV video sequences using objective metrics. The thesis theoretically describes the coding of the selected codecs H.265/HEVC and VP9, objective video quality metrics, and subjective methods for assessing video sequence quality. The next chapter deals with applying the H.265/HEVC and VP9 codecs to selected video sequences in raw format, which produces the test sequence database. The quality of these videos is then measured by objective metrics and a selected subjective method. The results are compared in order to find the objective metrics that correlate most consistently with the subjective assessment.
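The comparison step described above typically boils down to correlating objective scores with subjective ratings. A minimal sketch using SciPy is shown below; the per-sequence values are placeholders, not results from the thesis.

```python
# Minimal sketch: Pearson and Spearman correlation between an objective metric
# and subjective mean opinion scores (MOS). Values below are placeholders.
import numpy as np
from scipy import stats

objective = np.array([38.2, 35.1, 41.0, 33.7, 36.9])   # e.g. PSNR per sequence (dB)
mos = np.array([4.1, 3.5, 4.6, 3.1, 3.8])              # subjective mean opinion scores

pearson_r, _ = stats.pearsonr(objective, mos)           # linear correlation
spearman_r, _ = stats.spearmanr(objective, mos)         # rank-order correlation
print(f"Pearson: {pearson_r:.3f}, Spearman: {spearman_r:.3f}")
```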
30

Alternativní JPEG kodér/dekodér / An alternative JPEG coder/decoder

Jirák, Jakub January 2017 (has links)
JPEG is currently the most widely used image compression format. This work deals with the design and implementation of an alternative JPEG codec that uses proximal algorithms, combined with fixing points from the original image, to suppress the artifacts created by common JPEG coding. To solve the problem, the prox_TV and Douglas-Rachford algorithms were used, for which special functions using the l_1-norm for image reconstruction were derived. The proposed solution gives very good results: it effectively suppresses the created artifacts, and the output corresponds to an image encoded with a higher quality factor. The method works well for both simple images and photographs, but for large images (1024 × 1024 px and above) it requires a large amount of computing time, so it is better suited to smaller images.
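For reference, the core building block behind such l_1-based proximal schemes is the soft-thresholding operator; the short sketch below shows it in isolation (the full prox_TV / Douglas-Rachford reconstruction with fixed pixels is not reproduced here).

```python
# Sketch: soft-thresholding, the proximal operator of lam * ||x||_1, which
# proximal splitting schemes such as Douglas-Rachford build upon.
import numpy as np

def prox_l1(x, lam):
    """prox_{lam*||.||_1}(x) = sign(x) * max(|x| - lam, 0), applied elementwise."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Example: small coefficients are zeroed, large ones shrink towards zero by lam.
coeffs = np.array([-3.0, -0.4, 0.0, 0.7, 2.5])
print(prox_l1(coeffs, lam=0.5))
```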
