About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Restoration of randomly blurred images with measurement error in the point spread function

Lam, Edward W. H. January 1990 (has links)
The restoration of images degraded by a stochastic, time-varying point spread function (PSF) is addressed. The object to be restored is assumed to remain fixed during the observation time, and a sequence of observations of the unknown object is assumed available. The true value of the random PSF is not known; however, for each observation a "noisy" measurement of the random PSF at the time of observation is assumed available. Practical applications in which the PSF is time-varying include situations in which the images are obtained through a nonhomogeneous medium such as water or the earth's atmosphere. Under such conditions, it is not possible to determine the PSF in advance, so attempts must be made to extract it from the degraded images themselves. A measurement of the PSF may be obtained either by isolating a naturally occurring point object in the scene, such as a reference star in optical astronomy, or by artificially installing an impulse light source in the scene. The noise in PSF measurements obtained in this manner is particularly troublesome when the light signals emitted by the point object are weak. In this thesis, we formulate a model for this restoration problem with PSF measurement error. A maximum likelihood filter and a Wiener filter are then developed for this model. Restorations are performed on simulated degraded images, and comparisons are made with standard filters of the classical restoration model (which ignores the PSF error), as well as with results based on the averaged degraded image and averaged PSFs. Experimental results confirm that the filters we developed outperform both those based on averaging and those that ignore the PSF measurement error. / Faculty of Applied Science / Department of Electrical and Computer Engineering / Graduate
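As a rough illustration of the Wiener-filter baseline that the abstract compares against, the sketch below averages the observations and the measured PSFs in the frequency domain and applies a classical Wiener filter. Unlike the thesis's filters, it ignores the PSF measurement error; the function name and the noise/signal-power parameters are assumptions of this sketch, not the thesis's notation.

```python
import numpy as np

def wiener_restore(observations, psf_measurements, noise_power, signal_power):
    """Restore an image from several blurred observations, each paired with a
    (noisy) PSF measurement, by averaging in the frequency domain and applying
    a classical Wiener filter. A simplified sketch: the thesis's filters
    additionally model the PSF measurement error, which is ignored here."""
    shape = observations[0].shape
    # Average the observations and the measured PSFs to reduce noise.
    G = np.mean([np.fft.fft2(g) for g in observations], axis=0)
    H = np.mean([np.fft.fft2(h, s=shape) for h in psf_measurements], axis=0)
    # Wiener filter: H* / (|H|^2 + noise-to-signal ratio)
    nsr = noise_power / signal_power
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(F_hat))
```

With a delta (no-blur) PSF and negligible noise, the filter returns the input essentially unchanged, which is a useful sanity check for the frequency-domain bookkeeping.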
2

Electric-field-induced second harmonic microscopy

Wu, Kui 28 August 2008 (has links)
Not available / text
3

The effects of aberrations in synthetic aperture systems

Hooker, Ross Brian, 1942- January 1974 (has links)
No description available.
4

On optimality and efficiency of parallel magnetic resonance imaging reconstruction: challenges and solutions

Nana, Roger. January 2008 (has links)
Thesis (Ph.D.)--Biomedical Engineering, Georgia Institute of Technology, 2009. / Committee Chair: Hu, Xiaoping; Committee Member: Keilholz, Shella; Committee Member: Mao, Hui; Committee Member: Martin, Diego; Committee Member: Oshinski, John. Part of the SMARTech Electronic Thesis and Dissertation Collection.
5

Image quality assessment using natural scene statistics

Sheikh, Hamid Rahim 28 August 2008 (has links)
Not available / text
6

Digital image noise smoothing using high frequency information

Jarrett, David Ward, 1963- January 1987 (has links)
The goal of digital image noise smoothing is to smooth noise in the image without smoothing edges and other high-frequency information. Statistically optimal methods must use accurate statistical models of the image and noise. Subjective methods must also characterize the image. Two methods using high-frequency information to augment existing noise smoothing methods are investigated: two-component-model (TCM) smoothing and second-derivative-enhancement (SDE) smoothing. TCM smoothing applies an optimal noise smoothing filter to a high-frequency residual, extracted from the noisy image using a two-component source model. The lower variance and increased stationarity of the residual compared to the original image increase this filter's effectiveness. SDE smoothing enhances the edges of the low-pass-filtered noisy image with the second derivative, extracted from the noisy image. Both methods are shown, through objective (statistical) and subjective (visual) comparisons, to perform better than the methods they augment.
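The second-derivative-enhancement idea can be sketched in a few lines: smooth the noisy image with a low-pass filter, then restore edge contrast by subtracting a scaled discrete Laplacian extracted from the noisy image. The box filter, the gain `k`, and the function name are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

def sde_smooth(noisy, k=0.5):
    """Second-derivative-enhancement smoothing (sketch): low-pass filter the
    noisy image, then sharpen edges with a Laplacian from the noisy image."""
    h, w = noisy.shape
    pad = np.pad(noisy, 1, mode="edge")
    # 3x3 box filter: smooths noise but also blurs edges
    smoothed = sum(pad[i:i + h, j:j + w]
                   for i in range(3) for j in range(3)) / 9.0
    # discrete Laplacian (second derivative) taken from the noisy image
    lap = (pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:]
           - 4.0 * pad[1:-1, 1:-1])
    # subtracting the scaled Laplacian restores edge contrast lost to smoothing
    return smoothed - k * lap
```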
7

The knife edge test as a wavefront sensor (image processing).

KenKnight, Charles Elman. January 1987 (has links)
An algorithm to reduce data from the knife edge test is given. The method is an extension of the theory of single sideband holography to second order effects. Application to phase microscopy is especially useful because a troublesome second order term vanishes when the knife edge does not attenuate the unscattered radiation probing the specimen. The algorithm was tested by simulation of an active optics system that sensed and corrected small (less than quarter wavelength) wavefront errors. Convergence to a null was quadratic until limited by detector-injected noise in signal. The best form of the algorithm used only a Fourier transform of the smoothed detector record, a filtering of the transform, an inverse transform, and an arctangent solving for the phase of the input wavefront deformation. Iterations were helpful only for a Wiener filtering of the data record that weighted down Fourier amplitudes smaller than the mean noise level before analysis. The simplicity and sensitivity of this wavefront sensor makes it a candidate for active optic control of small-angle light scattering in space. In real time optical processing a two dimensional signal can be applied as a voltage to a deformable mirror and be received as an intensity modulation at an output plane. Combination of these features may permit a real time null test. Application to electron microscopy should allow the finding of defocus, astigmatism, and spherical aberrations for single micrographs at 0.2 nm resolution, provided a combination of specimen and support membrane is used that permits some a priori knowledge. For some thin specimens (up to nearly 100 atom layers thick) the left-right symmetry of diffraction should allow reconstruction of the wave-front deformations caused by the specimen with double the bandpass used in each image.
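The reduction chain the abstract describes (Fourier transform of the smoothed detector record, filtering of the transform, inverse transform, arctangent for the phase) resembles single-sideband demodulation. A minimal 1-D sketch, assuming a simple keep-positive-frequencies filter rather than the thesis's exact filter:

```python
import numpy as np

def knife_edge_phase(record):
    """Sketch of the reduction chain: FFT the smoothed detector record,
    keep one sideband (as in single-sideband holography), inverse
    transform, and take an arctangent to recover the phase."""
    F = np.fft.fft(record)
    n = record.size
    H = np.zeros(n)
    H[1:n // 2] = 2.0  # single-sideband filter: keep positive frequencies only
    analytic = np.fft.ifft(F * H)  # complex-valued analytic signal
    return np.arctan2(analytic.imag, analytic.real)
```

For a pure sinusoidal record this recovers the instantaneous phase, which is the quantity the wavefront sensor needs.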
8

Digital color image enhancement based on luminance & saturation.

Kim, Cheol-Sung. January 1987 (has links)
This dissertation analyzes the characteristics that distinguish color images from monochromatic images, combines these characteristics with monochromatic image enhancement techniques, and proposes useful color image enhancement algorithms. Luminance, hue, and saturation (L-H-S) color space is selected for color image enhancement. Color luminance is shown to play the most important role in achieving good image enhancement. Color saturation also exhibits unique features that contribute to the enhancement of high-frequency details and color contrast. The local windowing method, one of the most popular image processing techniques, is rigorously analyzed for the effects of window size and weighting values on the visual appearance of an image, and the subjective enhancement afforded by local image processing techniques is explained in terms of the human visual system response. The proposed digital color image enhancement algorithms are based on the observation that an enhanced luminance image yields a good color image in L-H-S color space when the chromatic components (hue and saturation) are kept the same. The saturation component usually contains high-frequency details that are not present in the luminance component. However, processing only the saturation, while keeping the luminance and hue unchanged, is not satisfactory because the human visual system responds to the chromatic components as a low-pass filter. To exploit the high-frequency details of the saturation component, we take the high-frequency component of the inverse saturation image, which correlates with the luminance image, and process the luminance image proportionally to this inverse saturation image. The proposed algorithms are simple to implement. The three main application areas of image enhancement (contrast enhancement, sharpness enhancement, and noise smoothing) are discussed separately.
The computer processing algorithms are restricted to those which preserve the natural appearance of the scene.
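The central observation above (enhance luminance only, keeping hue and saturation fixed) can be illustrated by scaling all three RGB channels by the same per-pixel factor, which changes luminance while preserving chromaticity. The mean-of-channels luminance proxy, the simple contrast stretch, and the gain are assumptions of this sketch, not the dissertation's algorithm.

```python
import numpy as np

def enhance_luminance(rgb, gain=1.2):
    """Enhance only the luminance of an RGB image (values in [0, 1]) while
    keeping hue and saturation fixed: scaling all three channels by the same
    per-pixel factor changes L but preserves the chromatic components."""
    lum = rgb.mean(axis=-1, keepdims=True)               # simple luminance proxy
    enhanced = np.clip(0.5 + gain * (lum - 0.5), 0, 1)   # contrast stretch on L
    scale = np.where(lum > 0, enhanced / np.maximum(lum, 1e-8), 0.0)
    return np.clip(rgb * scale, 0.0, 1.0)
```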
9

Objective Assessment of Image Quality: Extension of Numerical Observer Models to Multidimensional Medical Imaging Studies

Lorsakul, Auranuch January 2015 (has links)
Spanning the fields of engineering and medical image quality, this dissertation proposes a novel framework for diagnostic performance evaluation based on objective image-quality assessment, an important step in the development of new imaging devices, acquisitions, or image-processing techniques used by clinicians and researchers. The objective of this dissertation is to develop computational modeling tools that allow comprehensive task-based assessment, including clinical interpretation of images, regardless of image dimensionality. Because of advances in the development of medical imaging devices, several techniques have improved image quality in cases where the domain of the resulting images is multidimensional (e.g., 3D+time or 4D). To evaluate the performance of new imaging devices, or to optimize various design parameters and algorithms, quality should be measured with an appropriate image-quality figure of merit (FOM). Classical FOMs, such as bias and variance or mean-square error, have been broadly used in the past. Unfortunately, they do not reflect the fact that the principal agent in medical decision-making is frequently a human observer, nor do they account for the specific diagnostic task. The standard for image-quality assessment is a task-based approach in which one evaluates human observer performance on a specified diagnostic task (e.g., detection of lesions). However, having a human observer perform the tasks is costly and time-consuming. To facilitate practical task-based assessment of image quality, a numerical observer is required as a surrogate for human observers. Numerical observers for the detection task have previously been studied in both research and industry; however, little effort has been devoted to developing one for multidimensional imaging studies (e.g., 4D).
Without numerical-observer tools that accommodate all the information embedded in a series of images, assessing the performance of a new technique that generates multidimensional data is complex and limited. Consequently, key questions remain unanswered about how much these new multidimensional images improve image quality on a specific clinical task. To address this gap, this dissertation proposes a new numerical-observer methodology to assess the improvement achieved by newly developed imaging technologies. This numerical-observer approach can be generalized to exploit pertinent statistical information in multidimensional images and accurately predict the performance of a human observer across image domains of varying complexity. Part I of this dissertation develops a numerical observer that accommodates multidimensional images, processing correlated signal components and appropriately incorporating them into an absolute FOM. Part II applies the model developed in Part I to selected clinical applications with multidimensional images, including: 1) respiratory-gated positron emission tomography (PET) in lung cancer (3D+t), 2) kinetic parametric PET in head-and-neck cancer (3D+k), and 3) spectral computed tomography (CT) in atherosclerotic plaque (3D+e). The author compares the task-based performance of the proposed approach to that of conventional methods, evaluated under the widely used signal-known-exactly/background-known-exactly paradigm, in which the properties of a target object (e.g., a lesion) are specified on highly realistic clinical backgrounds. A realistic target object is generated with specific properties and applied to a set of images to create pathological scenarios for the performance evaluation, e.g., lesions in the lungs or plaques in the artery.
The regions of interest (ROIs) of the target objects are formed over an ensemble of data measurements under identical conditions and evaluated for the inclusion of useful information from different complex domains (i.e., 3D+t, 3D+k, 3D+e). This work provides an image-quality assessment metric with no dimensional limitation that could help substantially improve assessment of performance achieved from new developments in imaging that make use of high dimensional data.
10

Full-reference objective visual quality assessment for images and videos. / CUHK electronic theses & dissertations collection

January 2012 (has links)
Visual quality assessment (VQA) plays a fundamental role in multimedia applications. Since the human visual system (HVS) is the ultimate viewer of the visual information, subjective VQA is considered to be the most reliable way to evaluate visual quality. However, subjective VQA is time-consuming, expensive, and not feasible for on-line manipulation. Therefore, automatic objective VQA algorithms, namely visual quality metrics, have been developed and widely used in practical applications. However, it is well known that the popular visual quality metrics, such as Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR), correlate poorly with the human perception of visual quality. The development of more accurate objective VQA algorithms is therefore of paramount importance to future visual information processing and communication applications. / In this thesis, full-reference objective VQA algorithms are investigated. Three parts of the work are discussed, as briefly summarized below. / The first part concerns image quality assessment. It starts with an investigation of a popular image quality metric, the Structural Similarity Index (SSIM).
A novel weighting function is proposed and incorporated into SSIM, which leads to a substantial performance improvement in terms of matching subjective ratings. Inspired by this work, a novel image quality metric is developed by separately evaluating two distinct types of spatial distortions: detail losses and additive impairments. The proposed method demonstrates state-of-the-art predictive performance on most of the publicly available subjective quality image databases. / The second part investigates video quality assessment. We extend the proposed image quality metric to assess video quality by exploiting motion information and temporal HVS characteristics, e.g., an eye-movement spatio-velocity contrast sensitivity function, temporal masking using motion vectors, and temporal pooling that considers human cognitive behaviors. It has been experimentally verified that the proposed video quality metric achieves good performance on both standard-definition and high-definition video databases. We also propose a novel method to measure temporal inconsistency, an essential type of video temporal distortion. It is incorporated into the MSE for video quality assessment, and experiments show that it significantly enhances MSE's predictive performance. / The aforementioned algorithms analyze only luminance distortions. In the last part, we investigate chrominance distortions for a specific application: anaglyph image generation. The anaglyph image is one of the 3D displaying techniques, enabling stereoscopic perception on traditional TVs, PC monitors, projectors, and even paper. Three perceptual color attributes are taken into account for the color distortion measure, i.e., lightness, saturation, and hue, based on which a novel anaglyph image generation algorithm is developed via approximation in the CIELAB color space.
/ Li, Songnan. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2012. / Includes bibliographical references (leaves 122-130). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese. / Contents: Chapter 1, Introduction. Part I, Image Quality Assessment: Chapter 2, Weighted Structural Similarity Index based on Local Smoothness; Chapter 3, Image Quality Assessment by Decoupling Detail Losses and Additive Impairments. Part II, Video Quality Assessment: Chapter 4, Video Quality Assessment by Decoupling Detail Losses and Additive Impairments; Chapter 5, Temporal Inconsistency Measure. Part III, Application related to Color and 3D Perception: Chapter 6, Anaglyph Image Generation. Chapter 7, Conclusions. Bibliography.
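For reference, the PSNR metric that this thesis (and several abstracts above) argue correlates poorly with perceived quality is computed as follows. This is the standard textbook definition, not code from the thesis:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB: a log-scaled mean-square-error
    measure between a reference image and a test image."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    # identical images have zero error, hence infinite PSNR
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

Because PSNR depends only on pixel-wise differences, two distortions with equal MSE receive the same score regardless of how visible they are, which is precisely the shortcoming that perceptual metrics such as SSIM address.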
