1

Perceptual Criteria on Image Compression

Moreno Escobar, Jesús Jaime, 01 July 2011
Nowadays, digital images are used in many areas of everyday life, but they tend to be large. This increasing amount of information leads to the problem of image data storage. For example, it is common to represent a color pixel as a 24-bit number, where the red, green, and blue channels employ 8 bits each. Such a color pixel can therefore specify one of 2^24 ≈ 16.78 million colors, and an image at a resolution of 512 × 512 that allocates 24 bits per pixel occupies 786,432 bytes. That is why image compression is important. An important feature of image compression is that it can be lossy or lossless. A lossy-compressed image is acceptable provided the losses of image information are not perceived by the eye, which is possible because a portion of this information is redundant. Lossless image compression is defined as mathematically decoding exactly the same image that was encoded. Lossy image compression needs to identify two features inside the image: the redundancy and the irrelevancy of information. Thus, lossy compression modifies the image data in such a way that, when they are encoded and decoded, the recovered image is similar enough to the original one. How similar the recovered image must be to the original is defined prior to the compression process and depends on the implementation to be performed. Current lossy compression schemes remove information considered irrelevant by using mathematical criteria. One of the problems of these schemes is that, although the numerical quality of the compressed image is low, it may show a high visual quality, that is, few visible artifacts. This is because the mathematical criteria used to remove information do not take into account whether the removed information is perceived by the Human Visual System.
Therefore, the aim of an image compression scheme designed to obtain images that do not show artifacts, even though their numerical quality may be low, is to eliminate the information that is not visible to the Human Visual System. Hence, this Ph.D. thesis proposes to exploit the visual redundancy existing in an image by reducing those features that are imperceptible to the Human Visual System. First, we define an image quality metric that is highly correlated with the opinions given by human observers in psychophysical experiments. The proposed CwPSNR metric weights the well-known PSNR by means of a perceptual low-level model of the Human Visual System, the Chromatic Induction Wavelet Model (CIWaM). Second, we propose an image compression algorithm, called Hi-SET, which exploits the high correlation and self-similarity of pixels in a given neighborhood by means of a fractal function. Hi-SET possesses the main features of a modern image compressor, that is, it is an embedded coder that allows progressive transmission. Third, we propose a perceptual quantizer, ρSQ, which is a modification of the classical dead-zone uniform scalar quantizer. The standard quantizer is applied to an entire set of pixels in a given Wavelet sub-band, that is, as a global quantization; the proposed modification instead performs a local, pixel-by-pixel forward and inverse quantization, introducing a perceptual distortion that depends on the spatial information surrounding each pixel. Combining the ρSQ method with the Hi-SET coder, we define a perceptual image compressor called ΦSET. Finally, a coding method for Region of Interest areas is presented, ρGBbBShift, which perceptually weights the pixels inside these areas while the background retains only its perceptually most important features. The results presented in this thesis show that CwPSNR is the best-ranked image quality metric for the most common compression distortions, such as those of JPEG and JPEG2000, since it shows the best correlation with the judgement of human observers, based on the psychophysical experiments of the most relevant image quality databases: TID2008, LIVE, CSIQ and IVC. Furthermore, the Hi-SET coder obtains better results, both in compression ratio and in perceptual image quality, than the JPEG2000 coder and other coders that use a Hilbert fractal for image compression. When the proposed perceptual quantization is introduced into Hi-SET, yielding ΦSET, the compressor improves both its numerical and its perceptual efficiency. When the ρGBbBShift method applied to Hi-SET is compared against the MaxShift method applied to the JPEG2000 standard and to Hi-SET, the images coded by our ROI method obtain the best results when the overall image quality is assessed. Both the proposed perceptual quantization ρSQ and the ρGBbBShift method are general algorithms that can be applied to other Wavelet-based image compression algorithms, such as JPEG2000, SPIHT or SPECK.
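Two of the ideas described above, weighting PSNR with a perceptual model (as CwPSNR does with CIWaM) and replacing a single global dead-zone quantization with a pixel-wise perceptual one (as ρSQ does), can be illustrated with the hedged sketch below. The weight map, the step-scaling rule and all parameters are illustrative assumptions, not the thesis's actual CIWaM weighting or ρSQ formulas.

```python
import numpy as np

def weighted_psnr(ref, test, weight, peak=255.0):
    """PSNR whose per-pixel squared error is modulated by a perceptual
    weight map in (0, 1] that broadcasts against the image. The weight
    map is a stand-in for a model such as CIWaM; the actual CwPSNR
    definition is given in the thesis itself."""
    err = (ref.astype(np.float64) - test.astype(np.float64)) ** 2
    wmse = np.sum(weight * err) / np.sum(weight)
    return np.inf if wmse == 0 else 10.0 * np.log10(peak ** 2 / wmse)

def deadzone_quantize(coeffs, step, weight=None):
    """Dead-zone uniform scalar quantizer of Wavelet coefficients.
    With weight=None this is the usual global quantizer; a per-pixel
    weight map (larger = perceptually more important) shrinks the
    effective step locally, illustrating the general idea of a
    pixel-wise perceptual quantizer such as rho-SQ."""
    eff_step = step if weight is None else step / weight
    return np.sign(coeffs) * np.floor(np.abs(coeffs) / eff_step)

def deadzone_dequantize(q, step, weight=None, delta=0.5):
    """Inverse quantizer: non-zero indices are reconstructed at the
    centre (delta = 0.5) of their quantization interval."""
    eff_step = step if weight is None else step / weight
    return np.where(q == 0, 0.0, np.sign(q) * (np.abs(q) + delta) * eff_step)
```

In both functions, pixels that the weight map marks as perceptually important contribute more to the error measure and are quantized with a finer effective step.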
2

Quality Assessment for Halftone Images

Elmèr, Johnny, January 2023
Halftones are reproductions of images created through the process of halftoning. The goal of a halftone is to create a replica of an image which, at a distance, looks nearly identical to the original. Several methods for producing halftones are available, three of which are error diffusion, DBS and IMCDP. To check whether a halftone would be perceived as being of high quality there are two options: subjective image quality assessments (IQAs) and objective image quality (IQ) measurements. As subjective IQAs often take too much time and resources, objective IQ measurements are preferred. But since there is no standard for which metric should be used when working with halftones, the question remains which one to use. For this project, both online and on-location subjective testing was performed in which observers were tasked with ranking halftoned images by perceived image quality; the images were chosen specifically to cover a wide range of characteristics such as brightness and level of detail. The results of these tests were compiled and then compared to those of eight objective metrics: MSE, PSNR, S-CIELAB, SSIM, BlurMetric, BRISQUE, NIQE and PIQE. The subjective and objective results were compared using Z-scores and showed that SSIM and NIQE were the objective metrics that most closely resembled the subjective results. The online and on-location subjective tests differed greatly for dark colour halftones and colour halftones containing smooth transitions, with smaller variation for the other categories chosen. What did not change was the clear preference for DBS by both the observers and the objective IQ metrics, making it the best of the three methods tested. / The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.
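As a hedged illustration of the first of the three halftoning methods compared above, the sketch below implements classic Floyd-Steinberg error diffusion for a grayscale image in [0, 1]; the kernel and implementation details used in the thesis may differ.

```python
import numpy as np

def error_diffusion_fs(gray):
    """Floyd-Steinberg error diffusion: threshold each pixel to 0/1 and
    push the quantization error onto the unprocessed neighbours."""
    img = gray.astype(np.float64).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```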
3

Image Dynamic Range Enhancement

Ozyurek, Serkan, 01 September 2011
In this thesis, image dynamic range enhancement methods are studied in order to solve the problem of representing high dynamic range scenes with low dynamic range images. For this purpose, two main image dynamic range enhancement approaches, high dynamic range imaging and exposure fusion, are studied. A more detailed analysis of exposure fusion algorithms is carried out, because in exposure fusion the whole enhancement process is performed in the low dynamic range domain and no prior information about the input images is needed. In order to evaluate the performance of exposure fusion algorithms, both objective and subjective quality metrics are used. Moreover, the correlation between the objective quality metrics and the subjective ratings is studied in the experiments.
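As a hedged sketch of the exposure-fusion principle studied in the thesis, namely weighting each low dynamic range exposure by per-pixel quality measures and blending without leaving the low dynamic range domain, the fragment below shows a simplified single-scale fusion in the style of Mertens et al. The evaluated algorithms differ in their weights and normally blend with a multi-resolution pyramid to avoid seams, so this is only an illustration.

```python
import numpy as np
from scipy.ndimage import laplace

def exposure_fusion(exposures, sigma=0.2, eps=1e-12):
    """Single-scale exposure fusion of aligned LDR exposures (float RGB
    arrays in [0, 1]). Per-pixel weights combine contrast (Laplacian of
    luminance), saturation (std across channels) and well-exposedness
    (closeness to mid-gray), then the exposures are blended."""
    weights = []
    for img in exposures:
        gray = img.mean(axis=2)
        contrast = np.abs(laplace(gray))
        saturation = img.std(axis=2)
        well_exposed = np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2)).prod(axis=2)
        weights.append(contrast * saturation * well_exposed + eps)
    weights = np.stack(weights)                    # shape (N, H, W)
    weights /= weights.sum(axis=0, keepdims=True)  # normalise over exposures
    fused = (weights[..., None] * np.stack(exposures)).sum(axis=0)
    return np.clip(fused, 0.0, 1.0)
```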
4

Metodický přístup k evaluaci výpočtů transportu světla / A Methodical Approach to the Evaluation of Light Transport Computations

Tázlar, Vojtěch, January 2020
Photorealistic rendering has a wide variety of applications, and so there are many rendering algorithms and variations of them tailored for specific use cases. Even though practically all of them perform physically-based simulations of light transport, their results on the same scene are often different, sometimes because of the nature of a given algorithm and sometimes, worse, because of bugs in its implementation. It is difficult to compare these algorithms, especially across different rendering frameworks, because no standardized testing software or dataset is available. Therefore, the only way to get an unbiased comparison of algorithms is to create and use your own dataset or to reimplement the algorithms in one rendering framework of choice, but both solutions can be difficult and time-consuming. We address these problems with a test suite based on a rigorously defined methodology for evaluating light transport computations. We present a scripting framework for automated testing and fast comparison of rendering results and provide a documented set of non-volumetric test scenes for the most popular research-oriented rendering frameworks. Our test suite is easily extensible to support additional renderers and scenes.
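As a hedged sketch of the kind of automated comparison such a test suite performs (the error metric, function names and renderer labels are illustrative assumptions, not the thesis's actual scripts), each renderer's output for a scene can be scored against a shared reference image and ranked:

```python
import numpy as np

def relative_mse(render, reference, eps=1e-3):
    """Relative mean squared error, a common way to score a render
    against a (near-)converged reference image; eps guards dark pixels."""
    render = render.astype(np.float64)
    reference = reference.astype(np.float64)
    return float(np.mean((render - reference) ** 2 / (reference ** 2 + eps)))

def rank_renderers(reference, renders):
    """Rank renderer outputs of one scene by error against the reference.
    `renders` maps a renderer name to its output image (same shape as the
    reference); returns (name, error) pairs sorted best-first."""
    scores = {name: relative_mse(img, reference) for name, img in renders.items()}
    return sorted(scores.items(), key=lambda kv: kv[1])

# Hypothetical usage, with images already loaded as float arrays:
# ranking = rank_renderers(ref_img, {"pt": pt_img, "bdpt": bdpt_img, "vcm": vcm_img})
```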
5

Structural Characterization of Fibre Foam Materials Using Tomographic Data

Satish, Shwetha, January 2024
Plastic foams, such as Styrofoam, protect items during transport. Given the recycling challenges of these foams, there is growing interest in developing packaging alternatives from renewable resources, particularly cellulose fibres. A deep understanding of the foam structure, specifically achieving a uniform distribution of small pore sizes, is crucial to enhancing its mechanical properties. Prior work highlights the need for improved X-ray and image-processing techniques to address challenges in data acquisition and analysis. In this study, X-ray microtomography equipment was used to image a fibre foam sample, and software such as XMController and XMReconstructor produced 2D projection images at different magnifications (2X, 4X, 10X, and 20X). ImageJ and Python algorithms were then used to distinguish pores from fibres in the acquired images and to characterize the pores. This included bilateral filtering, which reduced background noise while preserving fibres in the grayscale images. The Otsu threshold method converted the grayscale image to a binary image, and the inverted binary image was used to compute the local thickness image. In the local thickness image, fibres are represented by pixel value zero, while spheres of different intensities represent the pores and their characteristics. As the magnification of the local thickness images increased, the pore area, pore volume, pore perimeter, and total number of pores decreased, indicating a shift towards a more uniform distribution of smaller pores. Histograms, scatter plots, and pore-intensity distributions visually confirmed this trend. Similarly, with increasing magnification the pore density increased, the porosity decreased, and the specific surface area remained constant, suggesting a more compact structure. Objective image quality metrics such as PSNR, RMSE, SSIM, and NCC were also used: grayscale images at different magnifications were compared, and it was noted that, as the number of projections increased, the 10X vs. 20X and 2X vs. 4X pairs consistently performed well in terms of image quality. The applied methodologies, comprising pore analysis and image quality metrics, show significant strengths in characterising porous structures and evaluating image quality.
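As a hedged sketch of the described processing chain (bilateral filtering, Otsu thresholding, inversion, a local-thickness style map, and pore measurements), the fragment below uses OpenCV, SciPy and scikit-image. The filter parameters are illustrative, and the Euclidean distance transform only approximates the local-thickness map produced by the ImageJ plugin used in the study, so the resulting statistics would not match the study's values exactly.

```python
import cv2
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def pore_statistics(gray_slice):
    """Rough pore characterization of one grayscale tomography slice."""
    # Edge-preserving denoising (diameter and sigma values are illustrative).
    smooth = cv2.bilateralFilter(gray_slice.astype(np.float32), 9, 25, 25)
    # Otsu threshold separates fibres (foreground) from pores, then invert
    # so that pores become the foreground.
    fibres = smooth > threshold_otsu(smooth)
    pores = ~fibres
    # Distance transform as a crude stand-in for a local-thickness map:
    # each pore pixel gets the radius of the largest ball fitting locally.
    thickness = ndimage.distance_transform_edt(pores)
    # Per-pore region measurements and global porosity.
    regions = regionprops(label(pores))
    return {
        "porosity": float(pores.mean()),
        "total_pores": len(regions),
        "mean_pore_area": float(np.mean([r.area for r in regions])) if regions else 0.0,
        "mean_pore_perimeter": float(np.mean([r.perimeter for r in regions])) if regions else 0.0,
        "mean_local_thickness": float(thickness[pores].mean()) if pores.any() else 0.0,
    }
```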
6

Compression Based Analysis of Image Artifacts: Application to Satellite Images

Roman-Gonzalez, Avid, 02 October 2013
This thesis aims at the automatic detection of artifacts in optical satellite images, such as aliasing, A/D conversion problems, striping, and compression noise; in short, any blemish that would not be present in an undistorted image. Artifact detection in Earth observation images becomes increasingly difficult as image resolution improves. For images of low, medium or high resolution, artifact signatures are sufficiently different from the useful signal to allow their characterization as distortions; however, as the resolution improves, the artifacts have, in terms of signal theory, a signature similar to that of the objects of interest in the image. Although it is more difficult to detect artifacts in very high resolution images, we need analysis tools that work reliably without impeding the extraction of objects from the image. Furthermore, the detection should be as automatic as possible, given the ever-increasing volumes of images that make any manual detection illusory. Finally, experience shows that not all artifacts are predictable, nor can they all be modeled as expected. Thus, any artifact detection should be as generic as possible, without requiring a model of the artifacts' origin or of their impact on the image. Outside the field of Earth observation, similar detection problems have arisen in multimedia image processing, including image quality evaluation, compression, watermarking, attack detection, image tampering, photo montage and steganalysis. In general, the techniques used to address these problems are based on direct or indirect measurements of intrinsic information and mutual information. Therefore, this thesis aims to translate these approaches to artifact detection in Earth observation images, relying in particular on the theories of Shannon and Kolmogorov, including rate-distortion measures and pattern-recognition-based compression. The results of these theories are then used to detect abnormally low or high complexity, or redundant patterns. The test images come from satellite instruments such as SPOT and MERIS. We propose several methods for artifact detection. The first method uses the Rate-Distortion (RD) function obtained by compressing an image with different compression factors and examines how an artifact can result in a degree of regularity or irregularity that affects the attainable compression rate. The second method uses the Normalized Compression Distance (NCD) and examines whether artifacts have similar patterns. The third method uses further RD-related approaches, such as the Kolmogorov Structure Function and the Complexity-to-Error Migration (CEM), to examine how artifacts can be observed in compression-decompression error maps. Finally, we compare the proposed methods with an existing method based on image quality metrics. The results show that artifact detection depends on the artifact intensity and on the type of surface cover contained in the satellite image.
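As a hedged illustration of the Normalized Compression Distance used by the second method, with zlib standing in for whichever compressor is actually used in the thesis, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)):

```python
import zlib

def ncd(x: bytes, y: bytes, level: int = 9) -> float:
    """Normalized Compression Distance between two byte strings.
    Values near 0 mean the inputs share most of their structure;
    values near 1 mean they are unrelated (up to compressor imperfection)."""
    cx = len(zlib.compress(x, level))
    cy = len(zlib.compress(y, level))
    cxy = len(zlib.compress(x + y, level))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Hypothetical usage: compare an image patch suspected to contain an artifact
# against a reference patch, both serialized to bytes (e.g. patch.tobytes()).
```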
