31 |
Image/video compression and quality assessment based on wavelet transform. Gao, Zhigang, 14 September 2007.
No description available.
|
32 |
Learning Based Image Analysis - Quality Assessment, Tracking and Classification. Justin Yang (19184554), 21 July 2024.
This dissertation presents four distinct studies in image processing and machine learning, with applications ranging from quality assessment of raster images in scanned documents and facial expression tracking in virtual reality to compression for continual learning and food image classification. First, we shift the traditional focus of image quality assessment (IQA) from natural images to scanned documents, proposing a machine learning based classification method to evaluate the visual quality of scanned raster images. We enhance the classifier's performance using augmented data generated through noise models that simulate scanning degradation. Second, we address the challenges of facial animation in immersive VR, developing a domain adversarial training model that generates domain-invariant features and combining it with manifold learning methods for accurate facial action unit (AU) intensity estimation from partially occluded facial images. Third, we explore the use of image compression to increase buffer capacity in continual machine learning systems, thereby enhancing exemplar diversity and mitigating catastrophic forgetting. Our approach includes a new framework that selects the compression rate and algorithm, showing significant improvements in image classification accuracy on the CIFAR-100 and ImageNet datasets. Finally, we combine class-activation maps with neural image compression in food image classification systems to adapt to continuously evolving data, extending buffer size and enhancing data diversity; the approach is validated on food-specific datasets and shows potential for broader application in continual machine learning systems. Together, these studies demonstrate the versatility of image processing and machine learning techniques in addressing complex and varied challenges across domains.
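The abstract does not specify the scanning-degradation noise models, so the following is only a minimal sketch of the augmentation idea, using hypothetical degradations (additive sensor noise, optical blur, dust specks); the function name and all parameters are illustrative placeholders, not the dissertation's models.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def augment_scan(image, rng):
    """Apply one randomly chosen synthetic scanning degradation.

    `image` is a float32 grayscale array in [0, 1]. The three noise
    models here are illustrative stand-ins for scanning degradation,
    not the models used in the dissertation.
    """
    choice = rng.integers(3)
    if choice == 0:                      # additive sensor noise
        out = image + rng.normal(0.0, 0.02, image.shape)
    elif choice == 1:                    # optical blur from the scan head
        out = gaussian_filter(image, sigma=rng.uniform(0.5, 1.5))
    else:                                # dust/speck artifacts
        out = image.copy()
        mask = rng.random(image.shape) < 0.002
        out[mask] = rng.integers(0, 2, mask.sum()).astype(np.float32)
    return np.clip(out, 0.0, 1.0)

# Usage: expand a small labeled set of scanned pages into a larger
# training set for the quality classifier.
rng = np.random.default_rng(0)
page = rng.random((64, 64)).astype(np.float32)   # placeholder scan
augmented = [augment_scan(page, rng) for _ in range(8)]
```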
|
33 |
Algorithms to Process and Measure Biometric Information Content in Low Quality Face and Iris Images. Youmaran, Richard, 02 February 2011.
Biometric systems identify people based on physiological or behavioral characteristics such as voice, handprint, iris, or facial features. The use of face and iris recognition to authenticate users' identities has been a research topic for years. Present iris recognition systems require subjects to stand close to the camera (<2 m) and look at it for about three seconds while the data are captured. This cooperative behavior is needed to capture images of sufficient quality for accurate recognition, but it restricts the range of practical applications, especially in uncontrolled environments where subjects, such as criminals or terrorists, cannot be expected to cooperate. This thesis therefore develops a collection of methods for handling low quality face and iris images that can be applied to face and iris recognition in non-cooperative environments. The main contributions are:
I. For eye and face tracking in low quality images, a new robust method is developed. The proposed system consists of three parts: face localization, eye detection, and eye tracking. This is accomplished by combining traditional passive image-based techniques, such as shape information of the eye, with active methods that exploit the spectral properties of the pupil under IR illumination. The method is also tested on underexposed images in which the subject makes large head movements.
II. For iris recognition, a new technique is developed for accurate iris segmentation in low quality images where a major portion of the iris is occluded. Most existing methods perform reasonably well but tend to overestimate the occluded regions and thus lose iris information that could be used for identification; this loss matters in the covert surveillance applications considered in this thesis. Once the iris region is segmented with the proposed method, the biometric feature information of the region is calculated using a relative entropy technique, with features obtained from two decomposition algorithms: Principal Component Analysis (PCA) and Independent Component Analysis (ICA).
III. For face recognition, a new approach is developed to measure biometric feature information and the changes in biometric sample quality caused by image degradations. A definition of biometric feature information is introduced and an algorithm to measure it is proposed, based on a set of population and individual biometric features as measured by a biometric algorithm under test. Its application is demonstrated for two face recognition algorithms based on PCA (Eigenface) and Fisher Linear Discriminant (FLD) feature decompositions.
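The relative entropy measure in contributions II and III admits a short sketch. Assuming, as a simplification, that both the individual's and the population's feature distributions (e.g. PCA coefficients) are modeled as multivariate Gaussians, the information is the closed-form KL divergence between them; the Gaussian assumption and all names below are illustrative, not the thesis's exact formulation.

```python
import numpy as np

def gaussian_relative_entropy(mu_p, cov_p, mu_q, cov_q):
    """KL divergence D(p||q) between two multivariate Gaussians, in bits.

    Here p models one person's feature distribution and q models the
    population distribution. Uses the standard closed form
    0.5 * [tr(Sq^-1 Sp) + (mq-mp)^T Sq^-1 (mq-mp) - k + ln(det Sq / det Sp)].
    """
    k = mu_p.shape[0]
    cov_q_inv = np.linalg.inv(cov_q)
    diff = mu_q - mu_p
    nats = 0.5 * (
        np.trace(cov_q_inv @ cov_p)
        + diff @ cov_q_inv @ diff
        - k
        + np.log(np.linalg.det(cov_q) / np.linalg.det(cov_p))
    )
    return nats / np.log(2.0)   # convert nats to bits

# Usage: rows are feature vectors (e.g. PCA coefficients) for one
# hypothetical subject versus a hypothetical population.
rng = np.random.default_rng(1)
person = rng.normal(0.3, 0.8, (40, 5))
population = rng.normal(0.0, 1.0, (4000, 5))
info_bits = gaussian_relative_entropy(
    person.mean(axis=0), np.cov(person, rowvar=False),
    population.mean(axis=0), np.cov(population, rowvar=False),
)
```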
|
35 |
SSIM-Inspired Quality Assessment, Compression, and Processing for Visual Communications. Rehman, Abdul, January 2013.
Objective Image and Video Quality Assessment (I/VQA) measures predict image/video quality as perceived by human beings, the ultimate consumers of visual data. Existing research in the area is mainly limited to benchmarking and monitoring of visual data. Using I/VQA measures in the design and optimization of image/video processing algorithms and systems is more desirable, more challenging, and potentially more fruitful, but has not been well explored. Among recently proposed objective I/VQA approaches, the structural similarity (SSIM) index and its variants have emerged as promising measures: they outperform the widely used mean squared error (MSE) and are computationally simple compared with other state-of-the-art perceptual quality measures. In addition, SSIM has a number of desirable mathematical properties for optimization tasks. The goal of this research is to break the tradition of using MSE as the optimization criterion for image and video processing algorithms. We tackle several important problems in visual communication applications by exploiting SSIM-inspired design and optimization to achieve significantly better performance.
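For concreteness, a minimal single-scale SSIM sketch follows; the Gaussian window, its width, and the stabilizing constants follow the commonly used defaults (K1 = 0.01, K2 = 0.03) rather than any implementation specific to this thesis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ssim(x, y, data_range=1.0, sigma=1.5):
    """Single-scale SSIM between two grayscale float images.

    Local means, variances, and covariance are taken over
    Gaussian-weighted windows; returns the overall score and the
    local quality map.
    """
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x = gaussian_filter(x, sigma)
    mu_y = gaussian_filter(y, sigma)
    var_x = gaussian_filter(x * x, sigma) - mu_x ** 2
    var_y = gaussian_filter(y * y, sigma) - mu_y ** 2
    cov_xy = gaussian_filter(x * y, sigma) - mu_x * mu_y
    ssim_map = ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
    return ssim_map.mean(), ssim_map
```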
Firstly, the original SSIM is a full-reference IQA (FR-IQA) measure that requires access to the original reference image, making it impractical in many visual communication applications. We propose a general-purpose reduced-reference IQA (RR-IQA) method that estimates SSIM with high accuracy from a small number of RR features extracted from the original image. Furthermore, we introduce and demonstrate the novel idea of partially repairing an image using RR features. Secondly, image processing algorithms such as denoising and super-resolution are required at various stages of visual communication systems, from image acquisition to display at the receiver. We incorporate SSIM into the frameworks of sparse signal representation and non-local means and demonstrate improved performance in image denoising and super-resolution. Thirdly, we incorporate SSIM into the framework of perceptual video compression. We propose an SSIM-based rate-distortion optimization scheme and an SSIM-inspired divisive normalization method that transforms DCT-domain frame residuals into a perceptually uniform space. Both approaches demonstrate the potential to substantially improve the rate-distortion performance of state-of-the-art video codecs. Finally, in real-world visual communications, end-users commonly receive video whose quality varies significantly over time due to variations in video content and complexity, codec configuration, and network conditions. How human quality of experience (QoE) changes with such time-varying video quality is not yet well understood. We propose a quality adaptation model that is asymmetrically tuned to increasing and decreasing quality; it improves upon the direct SSIM approach in predicting the subjective perceptual experience of time-varying video quality.
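The SSIM-based rate-distortion idea can be sketched schematically: among candidate coding modes, pick the one minimizing D + lambda*R with distortion D = 1 - SSIM. The candidate set, the lambda value, and the function names below are illustrative placeholders, not the thesis's scheme.

```python
from skimage.metrics import structural_similarity

def choose_mode(original_block, candidates, lam=0.05):
    """Pick the (reconstruction, bits) candidate minimizing the
    SSIM-based RD cost (1 - SSIM) + lambda * bits.

    `original_block` and each reconstruction are float arrays in
    [0, 1] of at least 7x7 (the default SSIM window size).
    """
    best, best_cost = None, float("inf")
    for recon, bits in candidates:
        d = 1.0 - structural_similarity(original_block, recon, data_range=1.0)
        cost = d + lam * bits
        if cost < best_cost:
            best, best_cost = recon, cost
    return best, best_cost
```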
|
37 |
SSIM Method Application for Large Image Analysis (original title: SSIM metodo taikymas didelių vaizdų analizei). Tichonov, Jevgenij, 07 August 2013.
The thesis analyzes one of the image quality assessment methods (metrics), the SSIM (structural similarity) index, and its use for analyzing large images. In the course of the work: • problems with several existing SSIM index implementations when assessing high-resolution images were identified; • the dependence of the computed values on the size of the images under study was established; • downsampling of image data within SSIM index algorithms was justified; • solutions were proposed for constructing an SSIM index algorithm aimed at assessing high-resolution images; • the running times of different SSIM index algorithms were compared; • software was developed for the MS Windows operating system that can be conveniently installed on a computer. In this software: an improved SSIM index algorithm is implemented; an SSIM difference map is displayed; and a user-friendly visual interface is provided. The implemented software can be used for educational purposes and for commissioned studies assessing the quality of processed images.
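The downsampling step the thesis justifies can be sketched as follows. The scale rule f = max(1, round(min(H, W) / 256)) mirrors the widely circulated reference SSIM implementation and is an assumption here, as is the use of scikit-image's structural_similarity; the thesis's own algorithm may differ.

```python
from scipy.ndimage import uniform_filter, zoom
from skimage.metrics import structural_similarity

def ssim_large(x, y):
    """SSIM for high-resolution grayscale images in [0, 1]: low-pass
    and downsample both inputs before computing SSIM, so that scores
    are comparable across image sizes.
    """
    f = max(1, round(min(x.shape) / 256))
    if f > 1:
        x = zoom(uniform_filter(x, size=f), 1.0 / f)   # anti-alias, then shrink
        y = zoom(uniform_filter(y, size=f), 1.0 / f)
    return structural_similarity(x, y, data_range=1.0)
```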
|
39 |
Local Phase Coherence Measurement for Image Analysis and Processing. Hassen, Rania Khairy Mohammed, January 2013.
The human ability to perceive significant patterns and structure in an image is something we take for granted: we recognize objects and patterns regardless of changes in image contrast and illumination. Over the past decades, it has been widely recognized in both biology and computer vision that phase carries critical information for characterizing the structures in images.
Despite the importance of local phase information and its considerable success in many computer vision and image processing applications, the coherence behavior of local phases across scale-space is not well understood. This thesis concentrates on developing an invariant image representation method based on local phase information. In particular, considerable effort is devoted to studying the coherence relationship between local phases at different scales in the vicinity of image features and to developing robust methods for measuring the strength of this relationship. A computational framework has been developed that computes local phase coherence (LPC) intensity with arbitrary selections of the number of coefficients, the scales, and the scale ratios between them. In particular, we formulate local phase prediction as an optimization problem, where the objective function measures the closeness between the true local phase and the phase predicted by LPC. The proposed framework not only facilitates flexible and reliable computation of LPC but also broadens its potential in many applications.
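To make the scale-space phase relation concrete, here is a minimal 1-D sketch. For three dyadic scales, the phases of complex wavelet (Gabor) coefficients at an ideal localized feature satisfy phi1 - 3*phi2 + 2*phi3 = 0, and coherence can be scored as the cosine of this residual. This is one simple instantiation under that assumption, not the flexible multi-coefficient framework developed in the thesis; the filter parameters are illustrative.

```python
import numpy as np

def gabor_responses(signal, base_sigma=2.0):
    """Complex Gabor responses of a 1-D signal at three dyadic scales
    (envelope widths s, 2s, 4s; center frequencies halving per scale)."""
    responses = []
    for k in range(3):
        sigma = base_sigma * 2 ** k
        t = np.arange(-int(4 * sigma), int(4 * sigma) + 1)
        omega = 2.0 / sigma
        kernel = np.exp(-t**2 / (2 * sigma**2)) * np.exp(1j * omega * t)
        kernel /= np.abs(kernel).sum()
        responses.append(np.convolve(signal, kernel, mode="same"))
    return responses

def lpc_strength(signal):
    """Pointwise local phase coherence of a 1-D signal.

    arg(c1 * conj(c2)^3 * c3^2) equals phi1 - 3*phi2 + 2*phi3, which
    vanishes at ideal features; its cosine is 1 where phases cohere.
    (No magnitude weighting is applied in this sketch.)
    """
    c1, c2, c3 = gabor_responses(signal)
    product = c1 * np.conj(c2) ** 3 * c3 ** 2
    return np.cos(np.angle(product))
```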
We demonstrate the potential of LPC in a number of image processing applications. Firstly, we develop a novel sharpness assessment algorithm, the LPC-Sharpness Index (LPC-SI), which requires no reference image. LPC-SI is tested on four subject-rated, publicly available image databases and demonstrates competitive performance compared with state-of-the-art algorithms. Secondly, a new fusion quality assessment algorithm is developed to objectively assess the performance of existing fusion algorithms. Validation on our subject-rated multi-exposure, multi-focus image database shows good correlation between subjective ranking scores and the proposed image fusion quality index. Thirdly, the invariance properties of the LPC measure are employed to solve image registration problems in which inconsistent intensity or contrast patterns are the major challenge: the LPC map is used to estimate the image plane transformation by maximizing a weighted mutual information objective function over a range of possible transformations. Finally, the disruption of phase coherence caused by blurring is exploited in a multi-focus image fusion algorithm that combines two activity measures, LPC for sharpness and local energy for contrast. We show that combining these two measures yields notable performance improvements, achieving maximal contrast and maximal sharpness simultaneously at each spatial location.
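A schematic version of that fusion rule, assuming the LPC sharpness maps have already been computed (e.g. with a 2-D extension of the sketch above); the window size and the multiplicative combination of the two activity measures are illustrative choices, not the thesis's exact rule.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_multifocus(img_a, img_b, lpc_a, lpc_b, win=7):
    """Fuse two differently focused grayscale images pixelwise.

    Each pixel is taken from the image with the higher combined
    activity: LPC (sharpness activity, precomputed) times local
    energy (contrast activity, computed as local variance).
    """
    def local_energy(img):
        mean = uniform_filter(img, win)
        return uniform_filter(img**2, win) - mean**2

    act_a = lpc_a * local_energy(img_a)
    act_b = lpc_b * local_energy(img_b)
    return np.where(act_a >= act_b, img_a, img_b)
```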
|