31

Human visual system informed perceptual quality assessment models for compressed medical images

Oh, Joonmi January 2000 (has links)
Hospital and clinical environments are rapidly moving toward the digital capture, processing, storage, and transmission of medical images. X-ray cardio-angiograms are used to observe coronary blood flow, diagnose arterial disease and perform coronary angioplasty or bypass surgery. The digital storage and transmission of these cardiovascular images has significant potential to improve patient care. For example, digital images enable electronic archiving, network transmission and useful manipulation of diagnostic information, such as image enhancement. Efficient compression of medical images is tremendously important for economical storage and fast transmission, since digitised medical images must be of high quality and, in general, require high resolution and occupy a large volume. The use of lossily compressed images has created a need for objective quality assessment metrics that measure the subjective opinions perceived by viewers, enabling an optimal compression rate/distortion trade-off. Quality assessment metrics based on models of the human visual system have predicted perceived quality more accurately than traditional error-based objective quality metrics. This thesis presents a Multi-stage Perceptual Quality Assessment (MPQA) model for compressed images. The motivation for developing a perceptual quality assessment is to measure the (in)visible physical differences between original and processed images. MPQA produces visible distortion maps and quantitative error measures informed by considerations of the human visual system. Original and decompressed images are decomposed into different spatial frequency bands and orientations, modelling the human cortex. Contrast errors are calculated for each frequency and orientation, and masked as a function of contrast sensitivity and background uncertainty.
Spatially masked contrast error measurements are made across frequency bands and orientations to produce a single Perceptual Distortion Visibility Map (PDVM). A Perceptual Quality Rating (PQR) is calculated from the PDVM and transformed onto a one-to-five scale for direct comparison with the Mean Opinion Score (MOS) generally used in subjective rating. For medical applications, acceptable decompressed medical images might be those which are perceptually pleasing, contain no visible artefacts and have no loss of diagnostic content. To investigate this problem, clinical tests identifying diagnostically acceptable image reconstructions are performed; they demonstrate that the proposed perceptual quality rating method agrees better with observers' responses than objective error measurement methods do. The vision models presented in the thesis are also implemented in the thresholding and quantisation stages of a compression algorithm. This HVS-informed perceptual thresholding and quantisation method is shown to produce improved compression ratio performance with fewer visible distortions.
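The pipeline the abstract describes (band/orientation decomposition, per-band contrast errors, pooling into a visibility map, mapping onto a one-to-five rating) can be sketched in a much-simplified form. The sketch below is illustrative only: the FFT-based radial band split, the uniform per-band weights, and the linear one-to-five mapping are crude stand-ins for the thesis's cortex-like filter bank, contrast-sensitivity masking, and PQR calibration.

```python
import numpy as np

def band_decompose(img, n_bands=4):
    """Split an image into radial spatial-frequency bands via the FFT
    (a stand-in for a cortex-like band/orientation filter bank)."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    y, x = np.ogrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    r = np.hypot(y, x) / (min(h, w) / 2)        # normalised radial frequency
    edges = np.linspace(0, 1, n_bands + 1)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (r >= lo) & (r < hi)
        bands.append(np.real(np.fft.ifft2(np.fft.ifftshift(F * mask))))
    return bands

def distortion_map(orig, decomp, n_bands=4, sensitivity=None):
    """Pool per-band errors into one visibility map, then a 1-5 rating
    (higher = better), loosely following the PDVM/PQR idea."""
    if sensitivity is None:                      # assumed flat CSF weights
        sensitivity = np.ones(n_bands)
    err = np.zeros_like(orig, dtype=float)
    for w_, bo, bd in zip(sensitivity,
                          band_decompose(orig, n_bands),
                          band_decompose(decomp, n_bands)):
        err += w_ * (bo - bd) ** 2               # simplified contrast error
    pdvm = np.sqrt(err)
    pqr = 5.0 - 4.0 * min(1.0, pdvm.mean())     # crude map onto a MOS-like scale
    return pdvm, pqr
```

An undistorted image maps to a rating of 5; any visible difference lowers the rating, which is the direction of comparison the MOS uses.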
32

Wavelet-based blind deconvolution and denoising of ultrasound scans for non-destructive test applications

Taylor, Jason Richard Benjamin 20 December 2012 (has links)
A novel technique for blind deconvolution of ultrasound is introduced. Existing deconvolution techniques for ultrasound such as cepstrum-based methods and the work of Adam and Michailovich – based on Discrete Wavelet Transform (DWT) shrinkage of the log-spectrum – exploit the smoothness of the pulse log-spectrum relative to the reflectivity function to estimate the pulse. To reduce the effects of non-stationarity in the ultrasound signal on both the pulse estimation and deconvolution, the log-spectrum is time-localized and represented as the Continuous Wavelet Transform (CWT) log-scalogram in the proposed technique. The pulse CWT coefficients are estimated via DWT shrinkage of the log-scalogram and are then deconvolved by wavelet-domain Wiener filtering. Parameters of the technique are found by heuristic optimization on a training set with various quality metrics: entropy, autocorrelation 6-dB width and fractal dimension. The technique is further enhanced by using different CWT wavelets for estimation and deconvolution, similar to the WienerChop method.
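The estimate-then-deconvolve structure described above can be sketched with a simplified stand-in: a moving-average smoothing of the log-spectrum replaces the DWT/CWT shrinkage, a zero-phase pulse is assumed, and `smooth` and `nsr` are illustrative parameters, not values from the thesis.

```python
import numpy as np

def blind_deconvolve(y, smooth=9, nsr=0.05):
    """Blind deconvolution sketch: the pulse log-spectrum is smooth relative
    to the reflectivity's, so smoothing the observed log-spectrum gives a
    pulse estimate, which is then removed by Wiener deconvolution."""
    Y = np.fft.fft(y)
    log_mag = np.log(np.abs(Y) + 1e-12)
    # Circular moving average stands in for the wavelet-shrinkage smoothing
    # used by the techniques the abstract cites.
    kernel = np.ones(smooth) / smooth
    pad = smooth // 2
    padded = np.concatenate([log_mag[-pad:], log_mag, log_mag[:pad]])
    pulse_log = np.convolve(padded, kernel, mode='valid')
    H = np.exp(pulse_log)                        # zero-phase magnitude estimate
    # Frequency-domain Wiener deconvolution with a flat noise-to-signal ratio
    X = Y * H / (H ** 2 + nsr * H.max() ** 2)
    return np.real(np.fft.ifft(X))
```

The output has the same length as the input trace; with a well-chosen `nsr` the reflectivity spikes sharpen while noise amplification stays bounded.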
33

Discriminant analysis using wavelet derived features

Wood, Mark January 2002 (has links)
This thesis examines the ability of the wavelet transform to form features which may be used successfully in a discriminant analysis. We apply our methods to two different data sets and consider the problem of selecting the 'best' features for discrimination. In the first data set, our interest is in automatically recognising the variety of a carrot from an image. After the necessary image preprocessing we examine the usefulness of shape descriptors and texture features for discrimination. We show that it is better to use the different 'types' of features separately, and that the wavelet coefficients of the outline coordinates are the more useful. In the second data set we consider the task of automatically identifying individual haddock from the sounds they produce. We use the smoothing property of wavelets to automatically isolate individual haddock sounds, and use the stationary wavelet transform to overcome the shift dependence of the standard wavelet transform. Again we calculate different 'types' of wavelet features and compare their usefulness in classification, showing that including information on the source of the previous sound can substantially increase the correct classification rate. We also apply our techniques to recognise different species of fish, again with considerable success. In each analysis, we explore different allocation rules via regularised discriminant analysis and show that the highest classification rates obtained are only slightly better than those of linear discriminant analysis. We also consider the problem of selecting the best subset of features for discrimination. We propose two new measures for selecting good subsets and, using a genetic algorithm, search for the 'best' subsets. We investigate the relationship between our measures and classification rates, showing that our method is better than selection based on F-ratios, and we also discover that our two measures are closely related.
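The core pipeline above (wavelet-derived features feeding a linear discriminant) can be sketched minimally. The Haar detail-energy features and the regularisation constant below are illustrative assumptions, not the feature 'types' or allocation rules compared in the thesis.

```python
import numpy as np

def haar_features(signal, levels=3):
    """Per-level Haar wavelet detail log-energies: a crude example of
    wavelet-derived features for discrimination."""
    s = np.asarray(signal, float)
    feats = []
    for _ in range(levels):
        if len(s) % 2:
            s = s[:-1]
        approx = (s[0::2] + s[1::2]) / np.sqrt(2)
        detail = (s[0::2] - s[1::2]) / np.sqrt(2)
        feats.append(np.log(np.sum(detail ** 2) + 1e-12))
        s = approx
    return np.array(feats)

class LinearDiscriminant:
    """Minimal linear discriminant analysis (pooled covariance)."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.means = np.array([X[y == c].mean(axis=0) for c in self.classes])
        d = X.shape[1]
        # pooled within-class covariance, lightly regularised
        Sw = sum(np.cov(X[y == c].T, bias=True) * (y == c).sum()
                 for c in self.classes) / len(X) + 1e-6 * np.eye(d)
        self.Sinv = np.linalg.inv(Sw)
        return self

    def predict(self, X):
        # assign each sample to the class mean at smallest Mahalanobis distance
        dists = [np.einsum('ij,jk,ik->i', X - m, self.Sinv, X - m)
                 for m in self.means]
        return self.classes[np.argmin(dists, axis=0)]
```

On two acoustically distinct signal classes (e.g. tonal versus broadband), the detail-energy profile across levels separates the classes well enough for the linear rule.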
34

A Wavelet Galerkin solution technique for the phase field model of microstructural evolution

Wang, Donglian January 2002 (has links)
No description available.
36

Interpolating scaling vectors and multiwavelets in R^d: a multiwavelet cookery book

Koch, Karsten January 2006 (has links)
Also published as: Marburg, Univ., doctoral dissertation, 2006
37

Interpolating scaling vectors and multiwavelets in R^d: a multiwavelet cookery book

Koch, Karsten. January 2007 (has links)
Marburg, Univ., doctoral dissertation, 2006.
38

Robust image recognition with local linear maps and elastic graph matching

Hardt, Florian. January 2006 (has links)
Stuttgart, Univ., doctoral dissertation, 2006.
39

Multi-dimensional wave digital filters and wavelets

Gottscheber, Achim. January 1999 (has links)
Mannheim, Univ., doctoral dissertation, 1998.
40

Wavelet modelling of ionospheric currents and induced magnetic fields from satellite data

Mayer, Carsten. Unknown Date (has links) (PDF)
Kaiserslautern, Univ., doctoral dissertation, 2003.
