31

Adaptive Spatio-temporal Filtering of 4D CT-Heart

Andersson, Mats, Knutsson, Hans January 2013 (has links)
The aim of this project is to keep the x-ray exposure of the patient as low as reasonably achievable while improving the diagnostic image quality for the radiologist. The means to achieve these goals is to develop and evaluate an efficient adaptive filtering (denoising/image enhancement) method that fully explores true 4D image acquisition modes. The proposed prototype system uses a novel filter set whose directional filter responses are monomials. The monomial filter concept is used both for estimation of local structure and for the anisotropic adaptive filtering. Initial tests on clinical 4D CT-heart data with ECG-gated exposure have resulted in a significant reduction of the noise level and increased detail compared to 2D and 3D methods. Another promising feature is that the reconstruction-induced streak artifacts which generally occur in low-dose CT are remarkably reduced in 4D.
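The filtering principle here — estimate local structure first, then steer the smoothing accordingly — can be illustrated with a minimal 2D sketch. Note the assumptions: the thesis's monomial filter sets and full 4D (3D + time) processing are replaced by an ordinary gradient-based structure tensor and a simple blend, so this shows only the general idea, not the proposed method.

    # Minimal 2D sketch of structure-adaptive smoothing. A gradient-based
    # structure tensor stands in for the monomial local-structure estimate
    # used in the thesis; the blend only modulates the amount of smoothing
    # rather than steering it along the local orientation.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def adaptive_smooth(img, grad_sigma=1.0, tensor_sigma=2.0, smooth_sigma=2.0):
        gy, gx = np.gradient(gaussian_filter(img, grad_sigma))
        jxx = gaussian_filter(gx * gx, tensor_sigma)
        jyy = gaussian_filter(gy * gy, tensor_sigma)
        jxy = gaussian_filter(gx * gy, tensor_sigma)
        tr, det = jxx + jyy, jxx * jyy - jxy ** 2
        disc = np.sqrt(np.maximum(tr ** 2 / 4.0 - det, 0.0))
        lam1, lam2 = tr / 2.0 + disc, tr / 2.0 - disc
        # Anisotropy is ~1 on oriented structures (edges, vessels) and ~0
        # in flat noisy regions, so noise is smoothed while edges are kept.
        aniso = (lam1 - lam2) / (lam1 + lam2 + 1e-12)
        return aniso * img + (1.0 - aniso) * gaussian_filter(img, smooth_sigma)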
32

Perceptual Video Quality Assessment and Enhancement

Zeng, Kai 12 August 2013 (has links)
With the rapid development of network visual communication technologies, digital video has become ubiquitous and indispensable in our everyday lives. Video acquisition, communication, and processing systems introduce various types of distortions, which may have a major impact on the video quality perceived by human observers. Effective and efficient objective video quality assessment (VQA) methods that can predict perceptual video quality are highly desirable in modern visual communication systems for performance evaluation, quality control, and resource allocation purposes. Moreover, perceptual VQA measures may also be employed to optimize a wide variety of video processing algorithms and systems for best perceptual quality. This thesis explores several novel ideas in the areas of video quality assessment and enhancement. Firstly, by considering a video signal as a 3D volume image, we propose a 3D structural similarity (SSIM) based full-reference (FR) VQA approach, which also incorporates local information content and local distortion-based pooling methods. Secondly, a reduced-reference (RR) VQA scheme is developed by tracing the evolution of local phase structures over time in the complex wavelet domain. Furthermore, we propose a quality-aware video system which combines spatial and temporal quality measures with a robust video watermarking technique, such that RR-VQA can be performed without transmitting RR features via an ancillary lossless channel. Finally, a novel strategy for enhancing video denoising algorithms, namely poly-view fusion, is developed by examining a video sequence as a 3D volume image from multiple (front, side, top) views. This leads to significant and consistent gains in terms of both peak signal-to-noise ratio (PSNR) and SSIM performance, especially at high noise levels.
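Since both the FR approach and the poly-view fusion strategy build on SSIM, a minimal windowed SSIM sketch is useful for orientation. Constants follow the common choice C1 = (0.01 L)^2, C2 = (0.03 L)^2; pooling is a plain mean rather than the information-content weighting proposed in the thesis. Because gaussian_filter is n-dimensional, passing 3D arrays treats a video as a volume image, in the spirit of the 3D SSIM idea.

    # Windowed SSIM between two float arrays x and y (2D frames or 3D
    # video volumes). Plain mean pooling; the thesis's information-content
    # pooling is not reproduced here.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def ssim(x, y, data_range=255.0, sigma=1.5):
        c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
        mu_x, mu_y = gaussian_filter(x, sigma), gaussian_filter(y, sigma)
        var_x = gaussian_filter(x * x, sigma) - mu_x ** 2
        var_y = gaussian_filter(y * y, sigma) - mu_y ** 2
        cov = gaussian_filter(x * y, sigma) - mu_x * mu_y
        ssim_map = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) \
                 / ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
        return ssim_map.mean()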
33

Application of Noise Invalidation Denoising in MRI

Elahi, Pegah January 2012 (has links)
Magnetic Resonance Imaging (MRI) is a common medical imaging tool that has been used in clinical practice for diagnostic and research purposes. These images are subject to noise during data acquisition, which can affect image quality and diagnostics. Therefore, improving the quality of the generated images, from both a resolution and a signal-to-noise ratio (SNR) perspective, is critical. Wavelet-based denoising is one of the common tools for removing noise in MRI images: the noise is eliminated from the detail coefficients of the signal in the wavelet domain by applying thresholding methods. The main task is to find an optimal threshold and keep all coefficients larger than this threshold as the noiseless ones. Noise Invalidation Denoising (NIDe) is a technique in which the optimal threshold is found by comparing the noisy signal to a noise signature (a function of the noise statistics). The original NIDe approach was developed for one-dimensional signals with additive Gaussian noise. In this work, the existing NIDe approach has been generalized for application to MRI images with a different noise distribution. The developed algorithm was tested on simulated data from the BrainWeb database and compared with the well-known Non-Local Means (NLM) filtering method for MRI. The results indicated better preservation of detailed structure for the NIDe approach on the magnitude data while the signal-to-noise ratio is comparable. The algorithm shows an important advantage: lower computational complexity than the NLM method. Furthermore, when the unbiased NLM technique is combined with the proposed technique, it can yield the same structural similarity while the signal-to-noise ratio is improved.
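The wavelet thresholding pipeline described above can be sketched in a few lines with PyWavelets. NIDe's actual contribution — deriving the threshold from a noise-signature comparison — is replaced here by the classical universal threshold, so treat this only as the surrounding skeleton into which a NIDe threshold would be plugged.

    # Wavelet-domain denoising skeleton (PyWavelets). The universal
    # threshold below is a placeholder; NIDe would instead find the
    # threshold by comparison against a noise signature.
    import numpy as np
    import pywt

    def wavelet_denoise(img, wavelet="db4", level=3):
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        # Robust noise estimate from the finest diagonal subband
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
        thr = sigma * np.sqrt(2 * np.log(img.size))
        out = [coeffs[0]]  # approximation coefficients are kept untouched
        for detail in coeffs[1:]:
            out.append(tuple(pywt.threshold(d, thr, mode="soft") for d in detail))
        return pywt.waverec2(out, wavelet)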
34

Curvelet imaging and processing : adaptive multiple elimination

Herrmann, Felix J., Verschuur, Eric January 2004 (has links)
Predictive multiple suppression methods consist of two main steps: a prediction step, in which multiples are predicted from the seismic data, and a subtraction step, in which the predicted multiples are matched with the true multiples in the data. The last step proves crucial in practice: an incorrect adaptive subtraction method will cause multiples to be sub-optimally subtracted, primaries to be distorted, or both. Therefore, we propose a new domain for the separation of primaries and multiples via the Curvelet transform. This transform maps the data into almost orthogonal localized events with a directional and spatio-temporal component. The multiples are suppressed by thresholding the input data at those Curvelet components where the predicted multiples have large amplitudes. In this way the more traditional filtering of the predicted multiples to fit the input data is avoided. An initial field data example shows a considerable improvement in multiple suppression.
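The separation step can be sketched generically: transform both the recorded data and the predicted multiples, zero the data coefficients wherever the predicted multiples are strong, and invert. In the sketch below an n-D wavelet transform stands in for the curvelet transform (which requires dedicated bindings), and the k * std threshold is an arbitrary placeholder, not the paper's choice.

    # Mask-and-reconstruct multiple suppression, with a wavelet transform
    # standing in for the curvelet transform of the paper.
    import numpy as np
    import pywt

    def suppress_multiples(data, predicted, wavelet="db4", level=3, k=1.0):
        cd = pywt.wavedecn(data, wavelet, level=level)
        cm = pywt.wavedecn(predicted, wavelet, level=level)
        ad, slices = pywt.coeffs_to_array(cd)
        am, _ = pywt.coeffs_to_array(cm)
        keep = np.abs(am) < k * np.std(am)   # mute where multiples are loud
        cp = pywt.array_to_coeffs(ad * keep, slices, output_format="wavedecn")
        return pywt.waverecn(cp, wavelet)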
35

Image Quality of Digital Breast Tomosynthesis: Optimization in Image Acquisition and Reconstruction

Wu, Gang 01 September 2014 (has links)
Breast cancer continues to be the most frequently diagnosed cancer in Canadian women. Currently, mammography is the clinically accepted best modality for breast cancer detection, and the regular use of screening has been shown to contribute to reduced mortality. However, mammography suffers from several drawbacks which limit its sensitivity and specificity. As a potential solution, digital breast tomosynthesis (DBT) uses a limited number (typically 10-20) of low-dose x-ray projections to produce a three-dimensional tomographic representation of the breast. The reconstruction of DBT images is challenged by such incomplete sampling. The purpose of this thesis is to evaluate the effect of image acquisition parameters on the image quality of DBT for various reconstruction techniques and to optimize them, with three specific goals: A) develop a better power spectrum estimator for detectability calculation as a task-based image quality index; B) develop a paired-view algorithm for artifact removal in DBT reconstruction; and C) increase dose efficiency in DBT by reducing random noise. A better power spectrum estimator was developed using a multitaper technique, which yields reduced bias and variance in estimation compared to the conventional moving-average method. This gives us an improved detectability measurement with finer frequency steps. The paired-view scheme in DBT reconstruction provides better image quality than the commonly used sequential method; a simple ordering like the “side-to-side” method can achieve fewer artifacts and higher image quality in reconstructed slices. The new denoising algorithm was applied to the projection views acquired in DBT before reconstruction; the random noise was markedly removed while the anatomic details were maintained. With the help of the artifact-removal technique used in reconstruction and the denoising method employed on the projection views, the image quality of DBT is enhanced and lesions should be more readily detectable.
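The multitaper idea in goal A is simple to state: instead of one windowed periodogram, average the periodograms obtained through several orthogonal DPSS (Slepian) tapers, trading a small controlled bias for a large variance reduction. A minimal 1D sketch follows (the thesis works on 2D image spectra):

    # Minimal 1D multitaper power-spectrum estimate using DPSS tapers.
    import numpy as np
    from scipy.signal.windows import dpss

    def multitaper_psd(x, nw=4.0, k=7):
        tapers = dpss(len(x), NW=nw, Kmax=k)             # shape (k, n)
        spectra = np.abs(np.fft.rfft(tapers * x[None, :], axis=1)) ** 2
        return spectra.mean(axis=0)                      # average over tapers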
36

Numerical Algorithms for Discrete Models of Image Denoising

Zhao, Hanqing 11 1900 (has links)
In this thesis, we develop some new models and efficient algorithms for image denoising. The total variation model of Rudin, Osher, and Fatemi (ROF) for image denoising is considered to be one of the most successful deterministic denoising models. It exploits the non-smooth total variation (TV) semi-norm to preserve discontinuities and to keep the edges of smooth regions sharp. Despite its simple form, the TV semi-norm results in a strongly nonlinear Euler-Lagrange equation and poses a computational challenge in solving the model efficiently. Moreover, this model produces the so-called staircase effect. In this thesis, we propose several new algorithms and models to solve these problems. We study the discretized ROF model and propose a new algorithm which does not involve partial differential equations. Convergence of the algorithm is analyzed, and numerical results show that it is efficient and stable. We then introduce a denoising model which utilizes high-order differences to approximate piecewise smooth functions. This model eliminates undesirable staircase artifacts and improves both visual quality and signal-to-noise ratio. Our algorithm is generalized to solve the high-order models, and a relaxation technique is proposed for the iteration scheme, aiming to accelerate the solution process. Finally, we propose a method combining total variation and wavelet packets to improve performance on texture-rich images. The ROF model is utilized to eliminate noise, and a wavelet packet transform is used to enhance textures. The numerical results show that the combined method exploits the advantages of both total variation and wavelet packets. / Mathematics
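For orientation, the discretized ROF model min_u TV(u) + (1/(2*lam)) ||u - f||^2 can be solved with Chambolle's well-known projection algorithm, sketched below with periodic boundary handling for brevity. This is a standard reference solver, not the PDE-free algorithm or the high-order and wavelet-packet models developed in the thesis.

    # Chambolle's projection algorithm for the discretized ROF model.
    # Denoised image: u = f - lam * div(p), with the dual field p updated
    # by a fixed-point iteration (tau <= 1/8 ensures convergence).
    import numpy as np

    def grad(u):
        return np.roll(u, -1, axis=1) - u, np.roll(u, -1, axis=0) - u

    def div(px, py):
        return (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))

    def rof_denoise(f, lam=10.0, tau=0.125, n_iter=100):
        px, py = np.zeros_like(f), np.zeros_like(f)
        for _ in range(n_iter):
            gx, gy = grad(div(px, py) - f / lam)
            norm = 1.0 + tau * np.sqrt(gx ** 2 + gy ** 2)
            px, py = (px + tau * gx) / norm, (py + tau * gy) / norm
        return f - lam * div(px, py)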
37

A Novel Image Retrieval Strategy Based on VPD and Depth with Pre-Processing

Wang, Tianyang 01 August 2015 (has links)
This dissertation proposes a comprehensive workflow for image retrieval. It contains four components: denoising, restoration, color feature extraction, and depth feature extraction. We propose a visual perceptual descriptor (VPD) to extract color features from an image. The gradient direction is calculated at each pixel, and the VPD is moved over the entire image to locate regions with similar gradient direction; color features are extracted only at these pixels. Experiments demonstrate that the VPD is an effective and reliable descriptor in image retrieval. We also propose a novel depth feature for image retrieval. We regard any 2D image as the convolution of a corresponding sharp image and a Gaussian kernel with an unknown blur amount. A sparse depth map is computed as the absolute difference of the original image and its sharp version, and the depth feature is extracted as the nuclear norm of this sparse depth map. Experiments validate the effectiveness of this approach on depth recovery and image retrieval. We present a model for image denoising in which a gradient term is incorporated and can be merged into the original model based on geometric measure theory. Experiments illustrate that this model is effective for image denoising, and it can improve retrieval performance by denoising a query image. A model is also proposed for image restoration. It is an extension of the traditional singular value thresholding (SVT) algorithm, addressing the issue that SVT cannot recover a matrix with missing rows or columns. We propose a way to fill such rows and columns and then apply SVT to restore the damaged image; the pre-filled entries are then recomputed by averaging their neighboring pixels. Experiments demonstrate the effectiveness of this model on image restoration, and it can improve retrieval performance by restoring a damaged query image. Finally, the capability of the whole workflow is tested, and experiments demonstrate its effectiveness in image retrieval.
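The restoration component builds on singular value thresholding; its core shrinkage step and the standard loop that re-imposes observed pixels are sketched below. The dissertation's actual extension — pre-filling entirely missing rows or columns from neighboring pixels before SVT, then recomputing them — is reduced here to a crude global-mean pre-fill, flagged in the comments.

    # Core SVT shrinkage plus a basic completion loop. mask is True at
    # observed pixels. The global-mean pre-fill is a crude stand-in for
    # the neighbor-averaging row/column fill proposed in the dissertation.
    import numpy as np

    def svt_shrink(m, tau):
        u, s, vt = np.linalg.svd(m, full_matrices=False)
        return u @ (np.maximum(s - tau, 0.0)[:, None] * vt)

    def restore(img, mask, tau=50.0, n_iter=200):
        x = img.copy()
        x[~mask] = img[mask].mean()      # crude pre-fill of missing entries
        for _ in range(n_iter):
            x = svt_shrink(x, tau)
            x[mask] = img[mask]          # re-impose the observed pixels
        return x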
38

A Level Set Approach for Denoising and Adaptively Smoothing Complex Geometry Stereolithography Files

January 2014 (has links)
abstract: Stereolithography (STL) files are widely used in diverse fields as a means of describing complex geometries through surface triangulations. The stereolithography output results from either experimental measurements or computer-aided design. Stereolithography outputs from experimental means are often prone to noise, surface irregularities, and holes in an otherwise closed surface. A general method for denoising and adaptively smoothing these dirty stereolithography files is proposed. Unlike existing means, this approach smooths the dirty surface representation by utilizing the well-established level set method. The level of smoothing and denoising can be set on a per-requirement basis by means of input parameters. Once the surface representation is smoothed as desired, it can be extracted as a standard level set scalar isosurface. The approach presented in this thesis is also coupled to a fully unstructured Cartesian mesh generation library with built-in localized adaptive mesh refinement (AMR) capabilities, thereby ensuring lower computational cost while also providing sufficient resolution. Future work will focus on implementing tetrahedral cuts to the base hexahedral mesh structure in order to extract a fully unstructured hexahedra-dominant mesh describing the STL geometry, which can be used for fluid flow simulations. / Dissertation/Thesis / Masters Thesis Aerospace Engineering 2014
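The essence of the pipeline — represent the surface implicitly as a scalar level set field, smooth the field rather than the triangulation, and re-extract the zero isosurface — can be sketched as below. Assumptions: Gaussian smoothing of the field is a crude stand-in for curvature-driven level set evolution, the adaptive Cartesian/AMR machinery is omitted, and phi is a signed distance field sampled on a regular grid (negative inside the surface).

    # Crude level-set smoothing sketch: smooth the implicit field, then
    # re-extract the zero isosurface with marching cubes. Gaussian
    # smoothing approximates (but is not) curvature-driven evolution.
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage.measure import marching_cubes

    def smooth_isosurface(phi, sigma=1.5):
        phi_s = gaussian_filter(phi, sigma)
        verts, faces, normals, values = marching_cubes(phi_s, level=0.0)
        return verts, faces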
