51

REFLECTED IMAGE PROCESSING FOR SPECULAR WELD POOL SURFACE MEASUREMENT

Janga, Aparna 01 January 2007 (has links)
The surface of the weld pool contains information that can be exploited to emulate a skilled human welder and to better understand and control the welding process. Among existing techniques, the approach that turns the pool's specular nature to advantage, and that is comparatively cost-effective and suited to the welding environment, reconstructs the 3D weld pool surface from reflected images using structured light and image processing. In this thesis, the existing method is improved by changing the welding direction to obtain a denser reflected dot-matrix pattern, allowing more accurate surface measurement. The reflected images, obtained by capturing the reflection of a structured laser dot-matrix pattern from the pool surface with a high-speed camera fitted with a narrow band-pass filter, are then processed by a newly proposed algorithm that locates each reflected dot relative to its corresponding projected dot. This is a complicated process owing to the increased dot density and the noise induced by the harsh welding environment. The resulting correspondence map can later be used by a surface reconstruction algorithm to derive the three-dimensional pool surface from the law of reflection.
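As a rough illustration of the dot-correspondence step, the sketch below thresholds a reflection image, extracts dot centroids, and matches each centroid to its nearest dot in the projected laser grid. This is a minimal sketch, not the thesis's algorithm: the synthetic image, the threshold, and the grid spacing are all hypothetical, and the real reflected patterns are far denser and noisier.

```python
import numpy as np
from scipy import ndimage

def detect_dot_centroids(image, threshold=0.6):
    """Threshold a grayscale reflection image and return dot centroids (row, col)."""
    binary = image > threshold * image.max()          # keep only bright laser dots
    labels, n = ndimage.label(binary)                 # connected components = dots
    return np.array(ndimage.center_of_mass(image, labels, range(1, n + 1)))

def match_to_grid(centroids, grid_points):
    """Assign each reflected dot to its nearest projected grid dot (brute force)."""
    d = np.linalg.norm(centroids[:, None, :] - grid_points[None, :, :], axis=2)
    return d.argmin(axis=1)                           # index of the matched grid dot

# Hypothetical example: a noisy synthetic image with a 5x5 dot grid.
rng = np.random.default_rng(0)
img = rng.normal(0.05, 0.02, (100, 100)).clip(0)
grid = np.array([(r, c) for r in range(10, 100, 20) for c in range(10, 100, 20)], float)
for r, c in grid.astype(int):
    img[r - 1:r + 2, c - 1:c + 2] += 1.0              # paint each reflected dot

cents = detect_dot_centroids(img)
print(match_to_grid(cents, grid))                     # the correspondence map
```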
52

Bearing condition monitoring using acoustic emission and vibration : the systems approach

Kaewkongka, Tonphong January 2002 (has links)
This thesis proposes a bearing condition monitoring system using acceleration and acoustic emission (AE) signals. Bearings are perhaps the most ubiquitous machine elements, and their condition is often critical to the success of an operation or process. Consequently, there is a great need for timely knowledge of the health status of bearings. Generally, bearing monitoring is the prediction of the component's health or status based on signal detection, processing and classification in order to identify the causes of a problem. As the monitoring system uses both acceleration and acoustic emission signals, it is considered a multi-sensor system. This has the advantage that not only do the two sensors provide increased reliability, they also permit a larger range of rotating speeds to be monitored successfully. When more than one sensor is used, if one fails to work properly the other is still able to provide adequate monitoring: vibration techniques are suitable for higher rotating speeds, whilst acoustic emission techniques suit low rotating speeds. The vibration techniques investigated in this research concern the use of the continuous wavelet transform (CWT), a joint time-frequency domain method. This gives a more accurate representation of the vibration phenomenon than either time-domain or frequency-domain analysis alone. An image processing technique, binarising, is applied to the CWT-transformed image to produce a binary image, reducing the computational time for classification. A back-propagation neural network (BPNN) is used for classification. The AE monitoring techniques investigated can be categorised, based on the features used, into: 1) the traditional AE parameters of energy, event duration and peak amplitude, and 2) statistical parameters estimated from the Weibull distribution of the inter-arrival times of AE events, in what is called the STL method. The traditional AE parameters of peak amplitude, energy and event duration are extracted from individual AE events. These events are then ordered, selected and normalised before the selected events are displayed in a three-dimensional Cartesian feature space with the three AE parameters as axes. The fuzzy C-means clustering technique is used to establish the cluster centres as signatures for different machine conditions. A minimum-distance classifier is then used to classify incoming AE events into the different machine conditions. The novel STL method is based on the detection of inter-arrival times of successive AE events. These inter-arrival times follow a Weibull distribution. The method provides two parameters, STL and L63, that are derived from the estimated Weibull parameters of the distribution's shape (γ), characteristic life (θ) and guaranteed life (t0). It is found that STL and L63 are related hyperbolically. In addition, the STL value is found to be sensitive to bearing wear, the load applied to the bearing and the bearing rotating speed. Of the three influencing factors, bearing wear has the strongest influence on STL and L63. For the proposed bearing condition monitoring system to work, the effects of load and speed on STL need to be compensated. These issues are resolved satisfactorily in the project.
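As a loose sketch of the statistical side of the STL idea (not the thesis's exact formulation), the following fits a two-parameter Weibull distribution to AE event inter-arrival times and reads off the characteristic life, the time by which roughly 63.2% of inter-arrivals have occurred. The variable names and the synthetic event stream are assumptions.

```python
import numpy as np
from scipy import stats

# Hypothetical AE event timestamps (seconds); real ones come from the AE sensor.
rng = np.random.default_rng(1)
event_times = np.cumsum(
    stats.weibull_min.rvs(1.5, scale=0.02, size=500, random_state=rng))

inter_arrival = np.diff(event_times)                  # successive inter-arrival times

# Fit a 2-parameter Weibull (guaranteed life t0 fixed at 0 via floc=0).
shape_gamma, _, scale_theta = stats.weibull_min.fit(inter_arrival, floc=0)

# Characteristic life: F(theta) = 1 - exp(-1) ≈ 0.632 for a Weibull distribution,
# so theta is the 63.2nd percentile of the inter-arrival times.
l63 = scale_theta

print(f"shape γ = {shape_gamma:.3f}, characteristic life θ = {scale_theta:.5f} s")
print(f"L63 ≈ {l63:.5f} s")
```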
53

Redução de ruído em sinais de voz no domínio wavelet / Noise reduction in speech signals in the wavelet domain

Duarte, Marco Aparecido Queiroz [UNESP] 01 February 2005 (has links) (PDF)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / This work presents a study of wavelet-based methods for reducing additive noise in speech signals and, building on that study, proposes a new wavelet-domain noise reduction method. The basic idea of most wavelet-based noise reduction methods is the determination and application of a threshold. This produces good results for signals contaminated by white noise, but is much less effective for signals contaminated by colored noise, the type most common in real situations. In those methods the threshold is generally computed in the silence intervals and applied to the whole signal: the wavelet-domain coefficients are compared with the threshold and those below it are eliminated, a linear application of the threshold. This elimination causes discontinuities in time and frequency in the processed signal. Moreover, the way the threshold is computed can degrade the voiced segments of the processed signal, particularly when the threshold depends strongly on the last window of the last silence segment. The method proposed in this work is also based on thresholding but, instead of applying the threshold linearly, applies it non-linearly, which avoids the discontinuities caused by other algorithms. The threshold is computed over the silence segments and depends not only on the last window of the last silence segment but on all of its windows, since it is the average of all thresholds computed in that segment. This makes the noise reduction more uniform and introduces less distortion in the processed signal. In addition, a further threshold is computed in the voiced segments and used together with the threshold computed in silence, so that the energy of the window being processed is also taken into account. This way, it is... (Complete abstract, click electronic address below).
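A minimal sketch of the general wavelet-thresholding scheme the abstract describes, not the author's specific non-linear rule: the threshold is estimated from a noise-only (silence) excerpt and applied by soft shrinkage. PyWavelets, the wavelet choice, and the silence/speech split are assumptions.

```python
import numpy as np
import pywt

def denoise(signal, silence, wavelet="db8", level=4):
    """Soft-threshold wavelet coefficients using a threshold estimated in silence.

    `silence` is a noise-only excerpt; per-subband thresholds estimated from it
    stand in for the thesis's averaged silence-segment threshold.
    """
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    noise_coeffs = pywt.wavedec(silence, wavelet, level=level)
    out = [coeffs[0]]                                   # keep approximation band
    for c, nc in zip(coeffs[1:], noise_coeffs[1:]):
        sigma = np.median(np.abs(nc)) / 0.6745          # robust noise estimate
        thr = sigma * np.sqrt(2 * np.log(len(c)))       # universal threshold
        out.append(pywt.threshold(c, thr, mode="soft")) # non-linear shrinkage
    return pywt.waverec(out, wavelet)

# Hypothetical data: a tone plus noise, with a noise-only "silence" stretch.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 4096)
noisy = np.sin(2 * np.pi * 200 * t) + 0.3 * rng.standard_normal(t.size)
clean = denoise(noisy, silence=0.3 * rng.standard_normal(1024))
```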
54

Projeto de uma arquitetura dedicada à compressão de imagens no padrão JPEG2000 / Design of a dedicated architecture for image compression in the JPEG2000 standard

Silva, Sandro Vilela da January 2005 (has links)
The increasing demands for higher data transmission rates and higher data storage capacity call for techniques that increase the compression rate of images while keeping image quality. The JPEG2000 standard proposes the use of the discrete wavelet transform and arithmetic coding to reach high compression rates while providing reasonable quality in the resulting compressed image. The standard allows lossy as well as lossless compression, depending on the type of wavelet transform used. This work implements in hardware the internal blocks that make up a lossy image compressor following the JPEG2000 standard. The main component of this image compressor is the two-dimensional irreversible discrete wavelet transform, implemented using a lifting scheme with the Daubechies 9/7 coefficients described in the literature. To provide high compression rates, the irreversible transform uses real-valued coefficients, originally proposed in floating-point representation. In this work they are implemented as fixed-point rounded coefficients, incurring errors that are estimated and controlled. Several hardware architectures for the two-dimensional irreversible discrete wavelet transform were implemented to evaluate the trade-off between type of description, area consumption and propagation delay. The architecture with the best trade-off requires 2,090 logic cells of an FPGA device and can operate at up to 78.72 MHz, giving a processing rate of 28.2 million samples per second. This architecture resulted in a mean squared error of 0.41% per decomposition level. The architecture for the entropy encoder block was synthesized from a behavioral description, generating hardware able to process up to 843 thousand input coefficients per second. The results indicate that the lossy JPEG2000 image compressor, using the blocks implemented in this dissertation and operating at the maximum defined clock frequency, can encode on average 1.8 million coefficients per second, that is, up to 27 frames of 256x256 pixels per second. The encoding rate is limited by the entropy encoder, whose more complex algorithm needs further work to be sped up through more parallel hardware.
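For illustration, here is a sketch of a one-dimensional forward 9/7 lifting step with the coefficients optionally quantized to fixed point, in the spirit of (but not identical to) the dissertation's hardware design. The lifting coefficients are the published CDF 9/7 values; the bit width and the simplified boundary handling are assumptions.

```python
import numpy as np

# Published CDF 9/7 lifting coefficients (floating point).
ALPHA, BETA = -1.586134342, -0.05298011854
GAMMA, DELTA = 0.8829110762, 0.4435068522
ZETA = 1.149604398

def quantize(c, frac_bits=12):
    """Round a coefficient to fixed point with `frac_bits` fractional bits."""
    return round(c * (1 << frac_bits)) / (1 << frac_bits)

def dwt97_lifting(x, frac_bits=None):
    """One level of the irreversible 9/7 forward DWT on an even-length signal."""
    a, b, g, d, z = ALPHA, BETA, GAMMA, DELTA, ZETA
    if frac_bits is not None:                    # emulate fixed-point hardware
        a, b, g, d, z = (quantize(c, frac_bits) for c in (a, b, g, d, z))
    s, w = x[0::2].astype(float), x[1::2].astype(float)
    nxt = lambda v: np.append(v[1:], v[-1])      # simple right-edge extension
    prv = lambda v: np.append(v[0], v[:-1])      # simple left-edge extension
    w += a * (s + nxt(s))                        # predict 1
    s += b * (prv(w) + w)                        # update 1
    w += g * (s + nxt(s))                        # predict 2
    s += d * (prv(w) + w)                        # update 2
    return s * z, w / z                          # low-pass, high-pass subbands

x = np.arange(16, dtype=float)
lo_f, hi_f = dwt97_lifting(x)                    # floating-point reference
lo_q, hi_q = dwt97_lifting(x, frac_bits=12)      # fixed-point rounded version
print(np.max(np.abs(lo_f - lo_q)))               # rounding-error estimate
```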
57

Filtrace signálů EKG s využitím vlnkové transformace / Wavelet filtering of ECG signals

Šugra, Marián January 2011 (has links)
This master's thesis focuses on filtering the ECG signal to suppress interfering power-line frequency components. The theoretical part covers electrocardiography, sources of interference in the ECG signal, and the principles of different types of filtering. The practical part describes linear filtering methods and discrete-time wavelet transform methods. The main goal of this work is to recommend the most suitable type of filtering.
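As a sketch of the linear-filtering baseline such a thesis compares against, here is a simple IIR notch filter at the mains frequency. The 50 Hz mains, 500 Hz sampling rate, and synthetic ECG stand-in are all assumptions.

```python
import numpy as np
from scipy import signal

fs = 500.0                                   # assumed ECG sampling rate (Hz)
f0, q = 50.0, 30.0                           # mains frequency and notch quality

b, a = signal.iirnotch(f0, q, fs=fs)         # narrow notch centered at 50 Hz

# Hypothetical ECG stand-in: a slow waveform plus 50 Hz mains interference.
t = np.arange(0, 10, 1 / fs)
ecg = 1.2 * np.sin(2 * np.pi * 1.1 * t) ** 3
noisy = ecg + 0.4 * np.sin(2 * np.pi * f0 * t)

filtered = signal.filtfilt(b, a, noisy)      # zero-phase filtering
print(np.abs(filtered - ecg).max())          # residual interference level
```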
58

A Discrete Wavelet Transform GAN for NonHomogeneous Dehazing

Fu, Minghan January 2021 (has links)
Hazy images are often subject to color distortion, blurring and other visible quality degradation. Some existing CNN-based methods have shown great performance in removing homogeneous haze, but they are not robust in the non-homogeneous case. The reason is twofold. Firstly, due to the complicated haze distribution, texture details are easily lost during the dehazing process. Secondly, since training pairs are hard to collect, training on limited data can easily lead to over-fitting. To tackle these two issues, we introduce a novel dehazing network using the 2D discrete wavelet transform, namely DW-GAN. Specifically, we propose a two-branch network to deal with the aforementioned problems. By utilizing the wavelet transform in the DWT branch, our proposed method can retain more high-frequency information in feature maps. To prevent over-fitting, an ImageNet pre-trained Res2Net is adopted in the knowledge adaptation branch. Owing to the robust feature representations of ImageNet pre-training, the generalization ability of our network is improved dramatically. Finally, a patch-based discriminator is used to reduce artifacts in the restored images. Extensive experimental results demonstrate that the proposed method outperforms the state of the art both quantitatively and qualitatively. / Thesis / Master of Applied Science (MASc)
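A small sketch of the idea behind the DWT branch: a one-level 2D discrete wavelet transform splits a feature map into low- and high-frequency subbands, so texture detail can be carried forward explicitly rather than lost in strided downsampling. This uses PyWavelets on a plain array; the network's actual in-graph DWT layer is not reproduced here.

```python
import numpy as np
import pywt

rng = np.random.default_rng(3)
feature_map = rng.standard_normal((64, 64))   # stand-in for a network feature map

# One-level 2D Haar DWT: LL is low-frequency; LH/HL/HH carry texture detail.
ll, (lh, hl, hh) = pywt.dwt2(feature_map, "haar")

# A DWT "downsampling" block can stack all four subbands as channels,
# halving resolution while keeping the high-frequency information.
stacked = np.stack([ll, lh, hl, hh])          # shape (4, 32, 32)
print(stacked.shape)

# Perfect reconstruction: the transform itself discards nothing.
recon = pywt.idwt2((ll, (lh, hl, hh)), "haar")
print(np.allclose(recon, feature_map))
```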
59

Automated Accident Detection In Intersections Via Digital Audio Signal Processing

Balraj, Navaneethakrishnan 13 December 2003 (has links)
The aim of this thesis is to design a system for automated accident detection in intersections. The input to the system is a three-second audio signal. The system can be operated in two modes: two-class and multi-class. The output of the two-class system is a label of 'crash' or 'non-crash'. In the multi-class system, the output is the label of 'crash' or of various non-crash incidents, including 'pile drive', 'brake', and 'normal-traffic' sounds. The system processes the input audio signal in three main steps: feature extraction, feature optimization and classification. Five different methods of feature extraction are investigated and compared; they are based on the discrete wavelet transform, fast Fourier transform, discrete cosine transform, real cepstrum transform and Mel-frequency cepstral transform. Linear discriminant analysis (LDA) is used to optimize the features obtained in the feature extraction stage by linearly combining them with different weights. Three types of statistical classifiers are investigated and compared: the nearest neighbor, nearest mean, and maximum likelihood methods. Data collected from Jackson, MS and Starkville, MS, together with crash signals obtained from the Texas Transportation Institute crash test facility, are used to train and test the system. The results showed that the wavelet-based feature extraction method with LDA and a maximum likelihood classifier is the optimum design, and this wavelet-based system is computationally inexpensive compared to the other methods. The system produced classification accuracies of 95% to 100% when the input signal has a signal-to-noise ratio of at least 0 decibels. These results show that the system is capable of effectively classifying 'crash' or 'non-crash' for a given input audio signal.
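A compressed sketch of the pipeline shape the thesis describes (wavelet energy features, an LDA projection, then a nearest-mean classifier), run on synthetic stand-in data; the real features, recordings, and tuning are the thesis's and are not reproduced here.

```python
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import NearestCentroid

def wavelet_features(clip, wavelet="db4", level=5):
    """Log-energy of each wavelet subband of a (three-second) audio clip."""
    coeffs = pywt.wavedec(clip, wavelet, level=level)
    return np.log([np.sum(c ** 2) + 1e-12 for c in coeffs])

# Hypothetical training data: 200 clips, two classes (non-crash=0, crash=1),
# where "crash" clips are simply louder for demonstration purposes.
rng = np.random.default_rng(4)
labels = rng.integers(0, 2, 200)
clips = [rng.standard_normal(4096) * (1.0 + 2.0 * y) for y in labels]
X = np.array([wavelet_features(c) for c in clips])

lda = LinearDiscriminantAnalysis(n_components=1).fit(X, labels)  # feature optimization
clf = NearestCentroid().fit(lda.transform(X), labels)            # nearest-mean classifier

test_clip = rng.standard_normal(4096) * 3.0                      # a loud event
print(clf.predict(lda.transform([wavelet_features(test_clip)]))) # 1 = 'crash'
```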
60

Multiresolution based, multisensor, multispectral image fusion

Pradhan, Pushkar S 06 August 2005 (has links)
Spaceborne sensors, which collect imagery of the Earth in various spectral bands, are limited by data transmission rates. As a result the multispectral bands are transmitted at a lower resolution and only the panchromatic band is transmitted at its full resolution. The information contained in the multispectral bands is an invaluable tool for land use mapping, urban feature extraction, etc., but the limited spatial resolution reduces the appeal and value of this information. Pan sharpening techniques enhance the spatial resolution of the multispectral imagery by extracting the high spatial resolution content of the panchromatic band and adding it to the multispectral images. Many pan sharpening methods exist, such as those based on the Intensity-Hue-Saturation and Principal Components Analysis transformations, but these cause heavy spectral distortion of the multispectral images, a drawback if the pan sharpened images are to be used for classification-based applications. In recent years, multiresolution-based techniques have received a lot of attention since they preserve the spectral fidelity of the pan sharpened images. Many variations of the multiresolution-based techniques exist; they differ in the transform used to extract the high spatial resolution information from the images and in the rules used to synthesize the pan sharpened image. The superiority of many of these techniques has been demonstrated only by comparison with fairly simple techniques like Intensity-Hue-Saturation or Principal Components Analysis, so there is much uncertainty in the pan sharpening community as to which technique is best at preserving spectral fidelity. This research investigates these variations in order to answer that question. An important parameter of the multiresolution-based methods is the number of decomposition levels to apply: it is found to affect both the spatial and spectral quality of the pan sharpened images. The minimum number of decomposition levels required to fuse the multispectral and panchromatic images was determined in this study for image pairs with different resolution ratios, and recommendations are made accordingly.
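A toy sketch of one common multiresolution fusion rule: substitute the panchromatic band's wavelet detail coefficients into each upsampled multispectral band, keeping the multispectral approximation for spectral fidelity. The arrays, wavelet choice, and two decomposition levels are illustrative assumptions, not the study's recommended settings.

```python
import numpy as np
import pywt

def pan_sharpen_band(ms_band_up, pan, wavelet="db2", levels=2):
    """Fuse one upsampled multispectral band with the panchromatic band.

    Keeps the MS approximation (spectral content) and substitutes the pan
    detail coefficients (spatial content) at every decomposition level.
    """
    ms_coeffs = pywt.wavedec2(ms_band_up, wavelet, level=levels)
    pan_coeffs = pywt.wavedec2(pan, wavelet, level=levels)
    fused = [ms_coeffs[0]] + pan_coeffs[1:]       # MS low-pass + pan details
    return pywt.waverec2(fused, wavelet)

# Hypothetical imagery: a 256x256 pan band and an MS band upsampled to match.
rng = np.random.default_rng(5)
pan = rng.random((256, 256))
ms_band_up = rng.random((256, 256))

sharpened = pan_sharpen_band(ms_band_up, pan)
print(sharpened.shape)                            # (256, 256)
```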
