11

Video quality assessment based on motion models

Seshadrinathan, Kalpana, 1980- 04 September 2012 (has links)
A large amount of digital visual data is being distributed and communicated globally, and video quality control has become a central concern. Unlike many signal processing applications, the intended receiver of video signals is nearly always the human eye. Video quality assessment algorithms must therefore attempt to assess perceptual degradations in videos. My dissertation focuses on full-reference methods of image and video quality assessment, where the availability of a perfect or pristine reference image/video is assumed. A large body of research on image quality assessment has focused on models of the human visual system. The premise behind such metrics is to process visual data by simulating the visual pathway of the eye-brain system. Recent approaches to image quality assessment, the structural similarity index and information theoretic models, avoid explicit modeling of visual mechanisms and instead use statistical properties derived from the images to formulate measurements of image quality. I show that the structure measurement in structural similarity is equivalent to the contrast masking models that form a critical component of many vision-based methods. I also show the equivalence of the structural and the information theoretic metrics under certain assumptions on the statistical distribution of the reference and distorted images. Videos contain many artifacts that are specific to motion and are largely temporal. Motion information plays a key role in the visual perception of video signals. I develop a general, spatio-spectrally localized multi-scale framework for evaluating dynamic video fidelity that integrates both spatial and temporal aspects of distortion assessment. Video quality is evaluated in space and time by evaluating motion quality along computed motion trajectories. Using this framework, I develop a full-reference video quality assessment algorithm known as the MOtion-based Video Integrity Evaluation index, or MOVIE index. Lastly, and significantly, I conducted a large-scale subjective study on a database of videos distorted by present-generation video processing and communication technology. The database contains 150 distorted videos obtained from 10 naturalistic reference videos, and each video was evaluated by 38 human subjects in the study. I study the performance of leading, publicly available objective video quality assessment algorithms on this database. / text
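
A concrete reference point for the structure term discussed in this abstract: below is a minimal Python sketch of the three per-patch SSIM components. The stabilizing constants follow commonly published defaults and are illustrative, not taken from the dissertation.

    import numpy as np

    def ssim_components(x, y, C1=0.01**2, C2=0.03**2, C3=0.03**2 / 2):
        """Luminance, contrast, and structure terms of SSIM for two
        same-sized patches x and y (values assumed scaled to [0, 1])."""
        mx, my = x.mean(), y.mean()
        sx, sy = x.std(), y.std()
        sxy = ((x - mx) * (y - my)).mean()
        luminance = (2 * mx * my + C1) / (mx**2 + my**2 + C1)
        contrast = (2 * sx * sy + C2) / (sx**2 + sy**2 + C2)
        # The structure term is a stabilized normalized cross-correlation;
        # the division by the signal standard deviations is what links it
        # to divisive-normalization (contrast masking) models of vision.
        structure = (sxy + C3) / (sx * sy + C3)
        return luminance, contrast, structure

With C3 = C2/2, the product of the contrast and structure terms collapses into the single covariance-based term of the familiar SSIM formula.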
12

Image communication system design based on the structural similarity index

Channappayya, Sumohana S., 1977- 28 August 2008 (has links)
The amount of digital image and video content being generated and shared has grown explosively in the recent past. The primary goal of image and video communication systems is to achieve the best possible visual quality under given rate constraints and channel conditions. In this dissertation, the focus is limited to image communication systems. In order to optimize the components of the communication system to maximize perceptual quality, it is important to use a good measure of quality. Even though this fact has long been recognized, the mean squared error (MSE), which is not the best measure of perceptual quality, has been a popular choice in the design of various components of an image communication system. Recent developments in the field of image quality assessment (IQA) have resulted in powerful new algorithms, including the structural similarity (SSIM) index, the visual information fidelity (VIF) criterion, and the visual signal to noise ratio (VSNR). The SSIM index is considered in this dissertation. I demonstrate that optimizing image processing algorithms for the SSIM index does indeed result in an improvement in the perceptual quality of the processed images. All the comparisons in this thesis are made against appropriate MSE-optimal equivalents. First, an SSIM-optimal linear estimator is derived and applied to the problem of image denoising. An algorithm for SSIM-optimal linear equalization is then developed and applied to the problem of image restoration. Following these linear solutions, I address the problem of SSIM-optimal soft thresholding, which is a nonlinear technique. The estimation, equalization, and soft-thresholding results all show a gain in visual quality compared to their MSE-optimal counterparts. These solutions are typically used at the receiver of an image communication system. On the transmitter side of the system, bounds on the SSIM index as a function of the rate allocated to a uniform quantizer are derived.
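
To illustrate the design principle, here is a small Python sketch that selects the same denoising parameter under an MSE criterion and under an SSIM criterion. It uses a brute-force search with off-the-shelf tools (scipy, scikit-image) rather than the closed-form SSIM-optimal estimators derived in the dissertation; the noise level and search grid are illustrative.

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage import data
    from skimage.metrics import structural_similarity as ssim

    # Illustrative setup: a clean reference image plus Gaussian noise.
    rng = np.random.default_rng(0)
    ref = data.camera() / 255.0
    noisy = np.clip(ref + 0.1 * rng.standard_normal(ref.shape), 0.0, 1.0)

    def best_sigma(score):
        """Brute-force search for the Gaussian denoising width that
        maximizes the given quality score."""
        sigmas = np.linspace(0.1, 3.0, 30)
        return max(sigmas, key=lambda s: score(gaussian_filter(noisy, s)))

    # MSE-optimal vs. SSIM-optimal choice of the same free parameter.
    sigma_mse = best_sigma(lambda d: -np.mean((d - ref) ** 2))
    sigma_ssim = best_sigma(lambda d: ssim(ref, d, data_range=1.0))
    print(f"MSE picks sigma={sigma_mse:.2f}, SSIM picks sigma={sigma_ssim:.2f}")

The point is only the principle: the same free parameter generally lands in a different place when the objective is perceptual (SSIM) rather than squared error.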
13

Iterative algorithms for fast, signal-to-noise ratio insensitive image restoration

Lie Chin Cheong, Patrick January 1987 (has links)
No description available.
14

On optimality and efficiency of parallel magnetic resonance imaging reconstruction: challenges and solutions

Nana, Roger 12 November 2008 (has links)
Imaging speed is an important issue in magnetic resonance imaging (MRI), as subject motion during image acquisition is liable to produce artifacts in the image. However, the speed at which data can be collected in conventional MRI is fundamentally limited by physical and physiological constraints. Parallel MRI is a technique that utilizes multiple receiver coils to increase the imaging speed beyond previous limits by reducing the amount of acquired data without degrading the image quality. In order to remove the image aliasing due to k-space undersampling, parallel MRI reconstructions invert the encoding matrix that describes the net effect of the magnetic field gradient encoding and the coil sensitivity profiles. The accuracy, stability, and efficiency of a matrix inversion strategy largely dictate the quality of the reconstructed image. This thesis addresses five specific issues pertaining to this linear inverse problem with practical solutions to improve clinical and research applications. First, for reconstruction algorithms adopting a k-space interpolation approach to the linear inverse problem, two methods are introduced that automatically select the optimal subset of k-space samples participating in the synthesis of a missing datum, guaranteeing an optimal compromise between accuracy and stability, i.e., the best balance between artifacts and signal-to-noise ratio (SNR). While the first is based on a cross-validation resampling technique, the second utilizes a newly introduced data consistency error (DCE) metric that exploits the shift-invariance property of the reconstruction kernel to provide a goodness measure of k-space interpolation in parallel MRI. Additionally, the utility of DCE as a metric for characterizing and comparing reconstruction methods is demonstrated. Second, a DCE-based strategy is introduced to improve reconstruction efficiency in real-time parallel dynamic MRI. Third, an efficient and reliable reconstruction method that operates on gridded k-space for parallel MRI using non-Cartesian trajectories is introduced, with a significant computational gain for applications involving repetitive measurements. Finally, a pulse sequence that combines parallel MRI and a multi-echo strategy is introduced for improving SNR and reducing the geometric distortion in diffusion tensor imaging. In addition, the sequence inherently provides a T2 map, complementary information that can be useful for some applications.
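
The k-space interpolation at the heart of such reconstructions can be sketched in a few lines of Python. The toy below fits a shift-invariant interpolation kernel on 1-D multi-coil data by least squares and uses the fit residual as a rough consistency check; the object, coil sensitivities, kernel geometry, and the residual check itself are illustrative stand-ins, not the thesis's methods.

    import numpy as np

    # Toy 1-D multi-coil k-space: a smooth object modulated by synthetic
    # coil sensitivity profiles.  All parameters are illustrative.
    ncoil, n = 4, 256
    x = np.linspace(-1.0, 1.0, n)
    obj = np.exp(-x**2 / 0.1)
    sens = (1.0 + np.outer(np.arange(1, ncoil + 1), x) ** 2) \
           * np.exp(1j * np.outer(np.arange(ncoil), x))
    kspace = np.fft.fft(sens * obj, axis=1)

    # Calibration: express each interior sample as a linear combination
    # of its two neighbors across all coils (a GRAPPA-flavored fit).
    src = np.stack([kspace[:, :-2], kspace[:, 2:]], axis=0)   # (2, ncoil, n-2)
    src = src.reshape(2 * ncoil, -1).T                        # (n-2, 2*ncoil)
    tgt = kspace[0, 1:-1]                                     # target: coil 0
    w, *_ = np.linalg.lstsq(src, tgt, rcond=None)             # kernel weights

    # In the spirit of the DCE idea: the residual of the shift-invariant
    # kernel on acquired samples gauges the quality of the interpolation.
    print("mean calibration residual:", np.abs(src @ w - tgt).mean())

Real reconstructions fit the kernel on a fully sampled calibration region and then synthesize the skipped lines; the residual printed above only stands in for the notion of exploiting kernel shift invariance as a goodness measure.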
16

Modulation transfer function measurements, image quality metrics, and subjective image quality for soft-copy color images

Jorna, Gerard C. 02 October 2007 (has links)
The effect of spatial frequency manipulation of color images and its impact on subjective image quality were examined by measuring subjective image quality and the changes in several prominent image quality metrics. The Modulation Transfer Function Area (MTFA), the Square Root Integral (SQRI), the Integrated Contrast Sensitivity (ICS), the Subjective Quality Factor (SQF), and several other acutance-derived image quality metrics were evaluated for their ability to predict image quality. Five distinct spatial frequency filters were applied to each of four pictorial color scenes in the horizontal dimension, the vertical dimension, and in a combined two-dimensional format. The same spatial frequency filters also were applied in a circular format. Thirty-two subjects, 24 college students and 8 Eastman Kodak employees, participated in a paired-comparison study in which 21 stimuli for each of the four scenes were evaluated for their perceived image quality. Horizontal and vertical modulation transfer functions were acquired by photometric scans of a cross-hair, one-pixel-wide white line target as well as by luminance profile manipulation of square waves of differing frequencies, both utilized as input signals. From the modulation transfer functions, values for the various image quality metrics were calculated and related to the subjective image quality data. In the evaluation of perceived image quality, several experimental procedures are available, such as magnitude estimation, rank ordering, rating scales, categorical judgments, and the method of paired comparison. The method of paired comparison is frequently avoided for its time-consuming nature. However, results indicate that with the use of computer automation it is a powerful and reliable experimental procedure for testing subjective preferences between digital stimuli containing small perceptual differences. The highest correlation between perceived image quality and image quality metrics was obtained for the pictorial scene which contained a uniform and dense set of frequencies in the horizontal and vertical directions. The lowest R² values were obtained for the pictorial scene that contained more scattered frequencies in both directions. Therefore, when performing perceived image quality evaluations of frequency-manipulated pictorial stimuli, it is advisable to use stimuli that contain a broad and uniform range of spatial frequencies. The impact of frequency manipulation is then more apparent and, in addition, it may provide for a more reliable transfer of results across experiments. Small increases in modulation produced perceived quality increases in the pictorial color images. Furthermore, improved image quality was obtained with low-frequency (less than 9 c/deg) modulation enhancement. In addition, vertical filtering produced greater subjective image quality improvement than did horizontal filtering. For all scenes, the two-dimensional filtered images were perceived as possessing equal or better quality than the circular filtered images. Enhancement confined to very low spatial frequencies (close to zero), with no high-frequency enhancement, increased perceived image quality only minutely, if at all; however, extending the enhancement from the low frequencies into the higher frequencies significantly improved perceived image quality. The SQRI metric is not recommended for use in the evaluation of image quality when changes in the MTFs occur at spatial frequencies of 3 cycles per degree and higher.
The ICS and MTFA behaved in an acceptable manner with changes in subjective image quality and should be considered for their computational accuracy and practicality. The SQF and the acutance metrics were highly recommended for predicting subjective image quality. In addition, the development of a standardized display measurement technique for color CRTs and a standardized verification process of display image quality are recommended. / Ph. D.
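
For readers unfamiliar with the MTFA, the following Python sketch shows its basic computation: the area between the display MTF and the observer's contrast threshold curve, accumulated up to the frequency where the two cross. Both curves here are toy stand-ins, not measurements from this study.

    import numpy as np

    def mtfa(freqs, mtf, threshold):
        """Modulation Transfer Function Area: area between the display
        MTF and the contrast threshold curve where the MTF exceeds it."""
        gain = mtf - threshold
        keep = gain > 0
        return np.trapz(gain[keep], freqs[keep])

    f = np.linspace(0.1, 30.0, 300)         # spatial frequency, c/deg
    display_mtf = np.exp(-f / 8.0)          # toy falling display MTF
    ctf = 0.005 * np.exp((f / 12.0) ** 2)   # toy rising threshold curve
    print("MTFA:", mtfa(f, display_mtf, ctf))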
17

Effects of color CRT misconvergence, target size, and nontarget density on visual search performance

Herb, Isabel Moghissi 08 June 2009 (has links)
Rapid engineering developments in electronic imaging during the last decade have led to the widespread use of color CRT displays. Misconvergence among the primary colors of a shadow-mask CRT is a principal human factors concern for applications of this technology. The major objectives of this research were: (1) to determine the effect of misconvergence on visual search performance (i.e., search time and error), (2) to examine the effects of misconvergence on subjective image quality estimates, and (3) to examine the interactions among target size, nontarget density, and misconvergence type and degree upon subjective and objective human performance indices. Ten participants performed a visual "search-and-select" task on a color CRT computer workstation. Following each trial in this procedure, participants subjectively rated the image quality of the display screen using a 9-point scale. Reducing target size increased selection errors and response times, while increasing nontarget density generally increased response times. Type and degree of shadow-mask CRT misconvergence had almost no effect on visual search performance, suggesting that low levels (1 to 2 arcmin) of misconvergence may be acceptable in effective color CRT applications. However, misconvergence adversely affected subjective image quality ratings, indicating that misconverged color CRTs should be avoided where their use is not essential. / Master of Science
18

Applying multiresolution and graph-searching techniques for boundary detection in biomedical images

Munechika, Stacy Mark, 1961- January 1989 (has links)
An edge-based segmentation scheme (i.e. boundary detector) for nuclear medicine images has been developed and consists of a multiresolutional Gaussian-based edge detector working in conjunction with a modified version of Nilsson's A* graph-search algorithm. A multiresolution technique of analyzing the edge-signature plot (edge gradient versus resolution scale) allows the edge detector to match an appropriately sized edge operator to the edge structure in order to measure the full extent of the edge and thus gain the best compromise between noise suppression and edge localization. The graph-search algorithm uses the output from the multiresolution edge detector as the primary component in a cost function which is then minimized to obtain the boundary path. The cost function can be adapted to include global information such as boundary curvature, shape, and similarity to prototype to help guide the boundary detection process in the absence of good edge information.
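
A minimal Python sketch of the graph-search stage appears below. It runs a plain minimum-cost (Dijkstra-style) search for a left-to-right boundary through a cost image, omitting the A* heuristic and the curvature, shape, and prototype terms described above.

    import heapq
    import numpy as np

    def min_cost_boundary(cost):
        """Minimum-cost left-to-right path through a 2-D cost image
        (low cost = strong edge evidence)."""
        rows, cols = cost.shape
        dist = np.full((rows, cols), np.inf)
        dist[:, 0] = cost[:, 0]
        prev = {}
        pq = [(cost[r, 0], r, 0) for r in range(rows)]
        heapq.heapify(pq)
        while pq:
            d, r, c = heapq.heappop(pq)
            if d > dist[r, c]:
                continue                    # stale queue entry
            if c == cols - 1:               # reached right edge: trace back
                path = [(r, c)]
                while (r, c) in prev:
                    r, c = prev[(r, c)]
                    path.append((r, c))
                return path[::-1]
            for dr in (-1, 0, 1):           # advance one column per step
                nr = r + dr
                if 0 <= nr < rows and d + cost[nr, c + 1] < dist[nr, c + 1]:
                    dist[nr, c + 1] = d + cost[nr, c + 1]
                    prev[(nr, c + 1)] = (r, c)
                    heapq.heappush(pq, (dist[nr, c + 1], nr, c + 1))
        return []

    cost = np.ones((5, 6))
    cost[2, :] = 0.1                        # a strong horizontal edge
    print(min_cost_boundary(cost))          # path hugs the low-cost row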
19

Sensor modeling and image restoration for a CCD pushbroom imager

Li, Wai-Mo, 1964- January 1987 (has links)
With the development of detector technology, remote sensing image detection is increasingly being implemented with charge-coupled devices (CCDs), which offer promising features. The French SPOT system is the first civilian satellite sensor employing a CCD in its detection unit. In order to obtain the system transfer function (TF), a linear system model is developed in the across- and along-track directions. The overall system TF, including pixel sampling effects, is then used in the Wiener filter function to derive an optimal restoration function. A restoration line spread function (RLSF) is obtained by taking the inverse Fourier transform of the Wiener filter and multiplying it with a window function. Simulation and empirical tests are described comparing the RLSF to standard kernels used for image resampling for geometric correction. The RLSF results in superior edge enhancement, as expected.
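
The RLSF construction can be sketched in Python as follows: model a 1-D system transfer function, form the Wiener filter, and window its inverse transform into a compact resampling kernel. The Gaussian optics term, noise level, and tap count are illustrative placeholders, not SPOT's measured parameters.

    import numpy as np

    n = 256
    f = np.fft.fftfreq(n)                      # normalized spatial frequency
    H = np.exp(-(f / 0.18) ** 2) * np.sinc(f)  # optics x detector aperture
    nsr = 0.01                                 # noise-to-signal power ratio
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)    # Wiener restoration filter

    rlsf = np.fft.fftshift(np.real(np.fft.ifft(W)))
    taps, center = 9, n // 2                   # truncate to a 9-tap kernel
    kernel = rlsf[center - taps // 2 : center + taps // 2 + 1].copy()
    kernel *= np.hamming(taps)                 # window suppresses ringing
    kernel /= kernel.sum()                     # preserve mean brightness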
20

Novel methods for scatter correction and dual energy imaging in cone-beam CT

Dong, Xue 22 May 2014 (has links)
Excessive imaging doses from repeated scans and poor image quality, mainly due to scatter contamination, are the two bottlenecks of cone-beam CT (CBCT) imaging. This study investigates a method that combines measurement-based scatter correction and a compressed sensing (CS)-based iterative reconstruction algorithm to generate scatter-free images from low-dose data. The scatter distribution is estimated by interpolating/extrapolating measured scatter samples inside blocked areas. CS-based iterative reconstruction is then carried out on the under-sampled data to obtain scatter-free and low-dose CBCT images. In the tabletop phantom studies, with only 25% of the dose of a conventional CBCT scan, our method reduces the overall CT number error from over 220 HU to less than 25 HU and increases the image contrast by a factor of 2.1 in the selected ROIs. Dual-energy CT (DECT) is another important application of CBCT. DECT shows promise in differentiating materials that are indistinguishable in single-energy CT and facilitates accurate diagnosis. A general problem of DECT is that decomposition is sensitive to noise in the two sets of projection data, resulting in severely degraded quality of the decomposed images. The first DECT study focuses on the linear decomposition method, for which a combined method of iterative reconstruction and decomposition is proposed. The noise on the two initial CT images from separate scans becomes well correlated, which avoids noise accumulation during the decomposition process. To fully explore the benefits of DECT on beam-hardening correction and to reduce the computation cost, the second study focuses on an iterative decomposition method with a non-linear decomposition model for noise suppression in DECT. Phantom results show that our methods achieve superior performance on DECT imaging with respect to noise reduction and spatial resolution.
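
The measurement-based scatter estimation step can be illustrated with a short Python sketch for a single detector row: samples measured behind blocking strips, where only scatter reaches the detector, are interpolated across the open field and subtracted. The signal models and blocker layout are invented for illustration; the CS-based reconstruction that fills in the blocked samples is not shown.

    import numpy as np

    n = 512
    u = np.arange(n)
    primary = np.where((u > 150) & (u < 360), 0.2, 1.0)  # toy projection
    scatter = 0.3 * np.exp(-((u - n / 2) / 300.0) ** 2)  # smooth scatter
    blocked = np.arange(16, n, 64)                       # blocker positions

    measured = primary + scatter
    measured[blocked] = scatter[blocked]   # behind a blocker: scatter only

    # Interpolate/extrapolate the scatter samples across the detector row
    # and subtract; np.interp holds the end values beyond the last blocker.
    est = np.interp(u, blocked, measured[blocked])
    corrected = measured - est
    print("max scatter estimation error:", np.abs(est - scatter).max())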
