  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
221

A Quantitative Analysis of Pansharpened Images

Vijayaraj, Veeraraghavan 07 August 2004 (has links)
There has been an exponential increase in satellite image data availability. Image data are now collected with different spatial, spectral, and temporal resolutions. Image fusion techniques are used extensively to combine different images having complementary information into a single composite. The fused image has rich information that will improve the performance of image analysis algorithms. Pansharpening is a pixel-level fusion technique used to increase the spatial resolution of a multispectral image using spatial information from a high-resolution panchromatic image, while preserving the spectral information in the multispectral image. Resolution merge, image integration, and multisensor data fusion are some of the equivalent terms used for pansharpening. Pansharpening techniques are applied for enhancing features not visible in any single data source alone, change detection using temporal data sets, improving geometric correction, and enhancing classification. Various pansharpening algorithms are available in the literature, and some have been incorporated in commercial remote sensing software packages such as ERDAS Imagine® and ENVI®. The performance of these algorithms varies both spectrally and spatially; hence, evaluating the spectral and spatial quality of pansharpened images with objective quality metrics is necessary. In this thesis, quantitative metrics for evaluating the quality of pansharpened images have been developed. For this study, Intensity-Hue-Saturation (IHS) based sharpening, Brovey sharpening, Principal Component Analysis (PCA) based sharpening, and a wavelet-based sharpening method are used.
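The Brovey sharpening named in this abstract can be sketched in a few lines: each multispectral band is scaled by the ratio of the panchromatic pixel to the sum of the multispectral bands at that pixel. This is a hedged toy illustration on a 2x2 scene; the band names and values are made up, not taken from the thesis.

```python
# Minimal Brovey-transform sketch (illustrative values, pure Python).

def brovey_sharpen(ms_bands, pan):
    """Scale each MS band by pan / sum(MS bands) at every pixel."""
    n_rows, n_cols = len(pan), len(pan[0])
    sharpened = []
    for band in ms_bands:
        out = []
        for r in range(n_rows):
            row = []
            for c in range(n_cols):
                total = sum(b[r][c] for b in ms_bands)
                row.append(band[r][c] * pan[r][c] / total if total else 0.0)
            out.append(row)
        sharpened.append(out)
    return sharpened

# Toy 2x2 example: three MS bands already resampled to pan resolution.
red   = [[30.0, 60.0], [90.0, 30.0]]
green = [[40.0, 20.0], [30.0, 30.0]]
blue  = [[30.0, 20.0], [30.0, 40.0]]
pan   = [[120.0, 90.0], [140.0, 110.0]]

sharp = brovey_sharpen([red, green, blue], pan)
```

Because the output bands keep their relative proportions while adopting the pan intensity, Brovey tends to sharpen spatial detail at the cost of spectral distortion, which is exactly why the thesis's spectral quality metrics matter.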
222

SIGNAL DENOISING USING WAVELETS

NIBHANUPUDI, SWATHI January 2003 (has links)
No description available.
223

DEVELOPMENT AND COMPARATIVE ASSESSMENT OF CWT BASED DAMAGE DETECTION TECHNIQUES ON SIMULATED GEARBOX SIGNALS

RAGHUNATHAN, RAGHAVENDRAN 21 July 2006 (has links)
No description available.
224

Comparison of orthogonal and biorthogonal wavelets for multicarrier systems

Anoh, Kelvin O.O., Abd-Alhameed, Raed, Jones, Steven M.R., Noras, James M., Dama, Yousef A.S., Altimimi, A.M., Ali, N.T., Alkhambashi, M.S. January 2013 (has links)
Wavelets are constructed from the basis sets of their parent scaling functions of the two-scale dilation equation (1). Whereas orthogonal wavelets come from a single orthogonal basis set, biorthogonal wavelets are projected from two different basis sets. Each basis set is correspondingly weighted to form filters, either highpass or lowpass, which form the constituents of quadrature mirror filter (QMF) banks. Consequently, these filters can be used to design wavelets, with the differently weighted parameters contributing the respective wavelet properties that influence the performance of the transforms in applications such as multicarrier modulation. This study investigated these wavelets for multicarrier modulation applications. The results show that the optimum choice of wavelet for digital multicarrier communication signal processing may be quite different from the choices made in other areas of wavelet application, for example image and video compression.
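The QMF relation described in this abstract can be illustrated with the two-tap Haar pair, the simplest orthogonal case (a minimal sketch; the filter values are standard textbook constants, not taken from the paper). The highpass filter is an alternating-sign, time-reversed copy of the lowpass, and one analysis/synthesis stage reconstructs the input exactly.

```python
import math

# Haar lowpass (scaling) filter and the highpass derived by the QMF rule
# g[n] = (-1)^n * h[L-1-n].
h = [1 / math.sqrt(2), 1 / math.sqrt(2)]
g = [(-1) ** n * h[len(h) - 1 - n] for n in range(len(h))]

# Orthonormality checks: unit energy and zero cross-correlation.
energy_h = sum(c * c for c in h)
energy_g = sum(c * c for c in g)
cross = sum(a * b for a, b in zip(h, g))

# One analysis/synthesis stage on a toy signal: perfect reconstruction.
x = [4.0, 2.0, 5.0, 7.0]
approx = [h[0] * x[2 * i] + h[1] * x[2 * i + 1] for i in range(len(x) // 2)]
detail = [g[0] * x[2 * i] + g[1] * x[2 * i + 1] for i in range(len(x) // 2)]
recon = []
for a, d in zip(approx, detail):
    recon += [h[0] * a + g[0] * d, h[1] * a + g[1] * d]
```

In a biorthogonal bank the synthesis filters would come from a second, different basis set rather than being derived from `h` alone; the extra design freedom (e.g. symmetric filters) is what the paper weighs against orthogonal designs for multicarrier use.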
225

A wavelet-based technique for reducing noise in audio signals

Comer, K. Allen 08 June 2009 (has links)
Wavelets have received considerable attention in recent general signal processing, image processing, and pattern recognition literature, as a new method of signal analysis. This marks a transition in wavelet study from theoretical investigation to application-driven research. In this paper, wavelets and wavelet transformations are presented in a context intended to be appropriate as a first exposure to the engineer. The wavelet transform, more specifically the discrete wavelet transform, and its relationship to multiresolution analysis is then explored in a framework familiar to those versed in multirate digital signal processing concepts. Elements of the perspective offered by wavelet analysis, in contrast to the features of more conventional Fourier techniques, are examined. General procedures for wavelet-based signal processing applications are discussed and the specific application of reducing noise in audio signals examined. Within the context of this application, considerations unique to wavelet analysis are revealed and trade-offs analyzed. Finally, the results obtained from implementing the noise reduction system are presented and extensions to the technique proposed. / Master of Science
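The denoising recipe this abstract describes (transform, shrink the detail coefficients, invert) can be sketched with a single-level Haar transform and soft thresholding. This is a hedged, minimal illustration: the thesis does not mandate the Haar wavelet, this threshold value, or a single decomposition level.

```python
import math

def haar_dwt(x):
    """Single-level Haar analysis (x must have even length)."""
    s = math.sqrt(2)
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Single-level Haar synthesis."""
    s = math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / s, (a - d) / s]
    return out

def soft_threshold(coeffs, t):
    """Shrink coefficients toward zero by t; zero out the small ones."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

signal = [1.0, 1.1, 0.9, 1.0, 3.0, 3.1, 2.9, 3.0]  # toy "audio" with small noise
a, d = haar_dwt(signal)
d = soft_threshold(d, 0.2)          # small detail coefficients are noise-like
denoised = haar_idwt(a, d)
```

The trade-off the thesis analyzes lives in that threshold: too low and noise survives, too high and genuine transients in the audio are smoothed away.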
226

Wavelet-based Image Compression Using Human Visual System Models

Beegan, Andrew Peter 22 May 2001 (has links)
Recent research in transform-based image compression has focused on the wavelet transform due to its superior performance over other transforms. Performance is often measured solely in terms of peak signal-to-noise ratio (PSNR) and compression algorithms are optimized for this quantitative metric. The performance in terms of subjective quality is typically not evaluated. Moreover, the sensitivities of the human visual system (HVS) are often not incorporated into compression schemes. This paper develops new wavelet models of the HVS and illustrates their performance for various scalar wavelet and multiwavelet transforms. The performance is measured quantitatively (PSNR) and qualitatively using our new perceptual testing procedure. Our new HVS model is comprised of two components: CSF masking and asymmetric compression. CSF masking weights the wavelet coefficients according to the contrast sensitivity function (CSF)---a model of humans' sensitivity to spatial frequency. This mask gives the most perceptible information the highest priority in the quantizer. The second component of our HVS model is called asymmetric compression. It is well known that humans are more sensitive to luminance stimuli than they are to chrominance stimuli; asymmetric compression quantizes the chrominance spaces more severely than the luminance component. The results of extensive trials indicate that our HVS model improves both quantitative and qualitative performance. These trials included 14 observers, 4 grayscale images and 10 color images (both natural and synthetic). For grayscale images, although our HVS scheme lowers PSNR, it improves subjective quality. For color images, our HVS model improves both PSNR and subjective quality. A benchmark for our HVS method is the latest version of the international image compression standard---JPEG2000. In terms of subjective quality, our scheme is superior to JPEG2000 for all images; it also outperforms JPEG2000 by 1 to 3 dB in PSNR. 
/ Master of Science
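The CSF-masking step this abstract describes can be sketched as a per-subband weighting applied before quantization, so the quantizer spends its precision on the most perceptible frequencies. The weights below are illustrative placeholders only, not the thesis's HVS model; a real mask would be derived from the contrast sensitivity function per decomposition level and orientation.

```python
# Hypothetical per-level CSF sensitivities (placeholder values).
csf_weight = {1: 0.6, 2: 1.0, 3: 0.8}

def mask_subband(coeffs, level):
    """Weight wavelet coefficients by the assumed CSF sensitivity."""
    return [c * csf_weight[level] for c in coeffs]

def quantize(coeffs, step):
    """Uniform scalar quantization with the given step size."""
    return [round(c / step) * step for c in coeffs]

detail_level2 = [4.2, -1.3, 0.4]
masked = mask_subband(detail_level2, 2)
quantized = quantize(masked, 1.0)
```

The asymmetric-compression component would apply the same machinery with a coarser `step` on the chrominance subbands than on luminance, reflecting the lower chrominance sensitivity the abstract cites.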
227

Efficient Algorithms for Data Analytics in Geophysical Imaging

Kump, Joseph Lee 14 June 2021 (has links)
Modern sensing systems such as distributed acoustic sensing (DAS) can produce massive quantities of geophysical data, often in remote locations. This presents significant challenges with regard to data storage and efficient analysis. To address this, we have designed and implemented efficient algorithms for two commonly utilized techniques in geophysical imaging: cross-correlation, and multichannel analysis of surface waves (MASW). Our cross-correlation algorithms operate directly in the wavelet domain on compressed data without requiring a reconstruction of the original signal, reducing memory costs and improving scalability. Meanwhile, our MASW implementations make use of MPI parallelism and GPUs, and present a novel formulation of the problem for the GPU. / Master of Science / Modern sensor designs make it easier to collect large quantities of seismic vibration data. While this data can provide valuable insight, it is difficult to effectively store and analyze such a high data volume. We propose a few new, general-purpose algorithms that enable speedy use of two common methods in geophysical modeling and data analytics: cross-correlation, which provides a measure of similarity between signals; and multichannel analysis of surface waves, which is a seismic imaging technique. Our algorithms take advantage of hardware and software typically available on modern computers, and of the mathematical properties of these two methods.
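The idea of correlating in the wavelet domain rather than on raw samples can be illustrated with a zero-lag inner product: because an orthogonal wavelet transform preserves inner products (Parseval), the correlation computed from wavelet coefficients matches the sample-domain value, and if most detail coefficients are negligible, only the small approximation vectors need to be touched. This is a concept sketch only; the thesis's algorithms handle general compressed wavelet representations and lags.

```python
import math

def haar_approx(x):
    """Level-1 Haar approximation coefficients (half the data)."""
    s = math.sqrt(2)
    return [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]

def zero_lag_xcorr(a, b):
    """Zero-lag cross-correlation (inner product) of two sequences."""
    return sum(p * q for p, q in zip(a, b))

# Toy signals that are constant on sample pairs, so their Haar detail
# coefficients are exactly zero and the approximations carry everything.
x = [1.0, 1.0, 2.0, 2.0, 3.0, 3.0, 4.0, 4.0]
y = [2.0, 2.0, 4.0, 4.0, 6.0, 6.0, 8.0, 8.0]

full = zero_lag_xcorr(x, y)                          # sample domain
wav = zero_lag_xcorr(haar_approx(x), haar_approx(y)) # wavelet domain, half the work
```

Here the two values agree exactly while the wavelet-domain computation touches half as many numbers, which is the memory-and-scalability win the abstract describes.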
228

An FPGA-based Run-time Reconfigurable 2-D Discrete Wavelet Transform Core

Ballagh, Jonathan Bartlett 20 June 2001 (has links)
FPGAs provide an ideal template for run-time reconfigurable (RTR) designs. Only recently have RTR enabling design tools that bypass the traditional synthesis and bitstream generation process for FPGAs become available. The JBits tool suite is an environment that provides support for RTR designs on Xilinx Virtex and 4K devices. This research provides a comprehensive design process description of a two-dimensional discrete wavelet transform (DWT) core using the JBits run-time reconfigurable FPGA design tool suite. Several aspects of the design process are discussed, including implementation, simulation, debugging, and hardware interfacing to a reconfigurable computing platform. The DWT lends itself to a straightforward implementation in hardware, requiring relatively simple logic for control and address generation circuitry. Through the application of RTR techniques to the DWT, this research attempts to exploit certain advantages that are unobtainable with static implementations. Performance results of the DWT core are presented, including speed of operation, resource consumption, and reconfiguration overhead times. / Master of Science
229

Denoising and contrast constancy.

McIlhagga, William H. January 2004 (has links)
Contrast constancy is the ability to perceive object contrast independent of size or spatial frequency, even though these affect both retinal contrast and detectability. Like other perceptual constancies, it is evidence that the visual system infers the stable properties of objects from the changing properties of retinal images. Here it is shown that perceived contrast is based on an optimal thresholding estimator of object contrast, which is identical to the VisuShrink estimator used in wavelet denoising.
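The VisuShrink estimator referenced here uses the "universal" threshold sigma * sqrt(2 ln n), with the noise level sigma estimated from the median absolute value of the finest detail coefficients divided by 0.6745. The sketch below shows just that threshold computation; the coefficient values are made-up numbers.

```python
import math

def visushrink_threshold(detail, n):
    """Universal threshold: sigma_hat * sqrt(2 ln n), with sigma_hat
    estimated from the median absolute detail coefficient / 0.6745."""
    sigma = sorted(abs(d) for d in detail)[len(detail) // 2] / 0.6745
    return sigma * math.sqrt(2.0 * math.log(n))

detail = [0.1, -0.2, 0.15, -0.05, 0.3, -0.1, 0.2, -0.25]
t = visushrink_threshold(detail, n=8)
```

Coefficients below `t` are treated as noise and shrunk to zero; the paper's claim is that perceived contrast behaves as if the visual system applies exactly this kind of optimal thresholding.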
230

Pedestrian detection in EO and IR video

Reilly, Vladimir 01 January 2006 (has links)
The task of determining the types of objects present in a scene, or object recognition, is one of the fundamental problems of computer vision. Applications include medical imaging, security, and multimedia database search. For example, before attempting to detect suspicious behavior, an automated surveillance system would have to determine the classes of objects that are attempting to interact. This task is adversely affected by poor-quality video or images. For my thesis I addressed the problem of differentiating between pedestrians and vehicles in both infrared and electro-optical videos. The problem was made quite difficult by the targets' small size, the poor quality of the video, and the limited precision of the moving-target-indicator algorithm. However, combining the inverse discrete wavelet transform (IDWT) for feature extraction with a Support Vector Machine (SVM) for the actual classification provided results superior to other features and machine learning techniques.
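The feature-extraction stage in a pipeline like this one can be sketched as computing wavelet-subband energies from a target chip and handing them to a classifier. This is a generic, hedged illustration using a Haar transform on one made-up image row; the thesis's actual features come from the IDWT and are paired with an SVM, which is omitted here.

```python
import math

def haar_dwt(x):
    """Single-level Haar analysis of one row (even length)."""
    s = math.sqrt(2)
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

def energy(coeffs):
    """Sum of squared coefficients: a simple subband-energy feature."""
    return sum(c * c for c in coeffs)

# One row of a hypothetical target chip (illustrative values).
row = [10.0, 12.0, 50.0, 52.0, 10.0, 11.0, 49.0, 50.0]
a, d = haar_dwt(row)
features = [energy(a), energy(d)]   # feature vector for a downstream classifier
```

Because the Haar transform is orthogonal, the two energies sum to the energy of the original row, so the feature vector is a lossless redistribution of the signal's energy across scales.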
