1

On the wavelet families for OFDM system comparisons over AWGN and Rayleigh Channels

Anoh, Kelvin O.O., Abd-Alhameed, Raed, Jones, Steven M.R., Dama, Yousef A.S., Bin-Melha, Mohammed S. January 2013
In the study of OFDM systems, the discrete wavelet transform has been reported to outperform the Fast Fourier Transform in multicarrier systems (MCS) in terms of spectral efficiency, because it can operate without a cyclic prefix and offers reduced side-lobes and improved BER. However, not all wavelet families perform alike. This study investigates several wavelet families, including Daubechies, Symlet, Haar (db1), biorthogonal, reverse-biorthogonal and Coiflet wavelets, for OFDM system design over AWGN and multipath (Rayleigh) channels. Results show that the Daubechies, Symlet, Haar and Coiflet families perform considerably better than the other families considered and are therefore better suited to wavelet-based OFDM.
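As a rough companion to this comparison, the sketch below simulates a simplified wavelet-based multicarrier link with PyWavelets: BPSK symbols are placed on the DWT coefficients of a frame, synthesised with the inverse transform, passed through AWGN and detected after the forward transform. It is only a stand-in for the authors' simulations (no Rayleigh fading, no FFT-OFDM baseline; frame length, SNR and decomposition level are assumed values), intended to show how the wavelet family can be swapped in such an experiment.

```python
import numpy as np
import pywt

def dwt_mcm_ber(wavelet_name, frame_len=4096, level=3, snr_db=8, seed=0):
    """Rough BER estimate for a DWT-based multicarrier link (BPSK over AWGN)."""
    rng = np.random.default_rng(seed)

    # Coefficient layout of one frame for the chosen wavelet and decomposition depth.
    layout = pywt.wavedec(np.zeros(frame_len), wavelet_name, level=level, mode="periodization")
    sizes = [len(c) for c in layout]
    n_bits = sum(sizes)

    bits = rng.integers(0, 2, n_bits)
    symbols = 2.0 * bits - 1.0                                   # BPSK mapping
    coeffs = np.split(symbols, np.cumsum(sizes)[:-1])
    tx = pywt.waverec(list(coeffs), wavelet_name, mode="periodization")   # IDWT = synthesis

    rx = tx + rng.normal(0.0, np.sqrt(10 ** (-snr_db / 10)), len(tx))     # AWGN channel

    # Receiver: forward DWT, then hard decisions on the recovered coefficients.
    rx_coeffs = np.concatenate(pywt.wavedec(rx, wavelet_name, level=level, mode="periodization"))
    n = min(len(rx_coeffs), n_bits)
    return float(np.mean((rx_coeffs[:n] > 0).astype(int) != bits[:n]))

for w in ["haar", "db4", "sym4", "coif2", "bior3.5", "rbio3.5"]:
    print(f"{w:8s} BER = {dwt_mcm_ber(w):.4f}")
```

Looping over the listed families reproduces the kind of family-by-family BER comparison the abstract describes, though the absolute numbers depend entirely on the assumed parameters.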
2

Two levels block based wavelet watermarking algorithm for still colour images

Jassim, Taha D., Al-Ahmad, Hussain, Abd-Alhameed, Raed, Al-Gindy, Ahmed M.N. January 2014
A robust watermarking technique is implemented for copyright protection. The proposed method is based on a 2-level discrete wavelet transform (DWT). The embedded watermark is a mobile phone number, including the international code. The first DWT level is applied to 16×16 blocks of the host image, and all the coefficients of the 8×8 low-low (LL1) first-level sub-band are grouped into one matrix. The second DWT level is then applied to this grouped matrix. The highest coefficient from the LL2 sub-band (4×4) is used for embedding the watermark information. Extraction is blind, since it does not require the original image at the receiver side. The distortion introduced into the host image by the watermarking process is minimal, with a PSNR greater than 60 dB. The proposed algorithm showed robustness against several attacks, such as scaling, filtering, cropping, additive noise and JPEG compression.
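A hedged illustration of the block-wise embedding idea follows, using PyWavelets. It deliberately simplifies the scheme in the abstract: the second DWT level is taken per 16×16 block rather than on the grouped LL1 matrix, the bit is carried in the parity of the quantised largest-magnitude LL2 coefficient (quantisation-index modulation), and the embedding strength `step` is a hypothetical parameter; a phone-number payload would simply be converted to a bit string beforehand.

```python
import numpy as np
import pywt

def embed_bits(image, bits, step=24.0, wavelet="haar"):
    """Embed one bit per 16x16 block by quantising the largest-magnitude LL2 coefficient
    of a two-level DWT. Illustrative sketch only, not the paper's exact algorithm."""
    out = np.asarray(image, dtype=float).copy()
    rows, cols = out.shape
    idx = 0
    for r in range(0, rows - 15, 16):
        for c in range(0, cols - 15, 16):
            if idx >= len(bits):
                return out
            block = out[r:r + 16, c:c + 16]
            ll1, d1 = pywt.dwt2(block, wavelet)        # first level: 8x8 sub-bands
            ll2, d2 = pywt.dwt2(ll1, wavelet)          # second level: 4x4 sub-bands
            i, j = np.unravel_index(np.argmax(np.abs(ll2)), ll2.shape)
            q = int(np.round(ll2[i, j] / step))
            if q % 2 != bits[idx]:                     # force parity to carry the bit
                q += 1
            ll2[i, j] = q * step
            ll1 = pywt.idwt2((ll2, d2), wavelet)       # rebuild the block with the mark
            out[r:r + 16, c:c + 16] = pywt.idwt2((ll1, d1), wavelet)
            idx += 1
    return out
```

Blind extraction would repeat the two-level decomposition of each block and read the parity of the quantised largest-magnitude LL2 coefficient, which is why the original image is not needed at the receiver.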
3

Applications of Allouba's Differentiation Theory and Semi-SPDEs

Fontes, Ramiro C. 19 April 2010
No description available.
4

Contribution aux architectures adaptatives : étude de l'efficacité énergétique dans le cas des applications à parallélisme de données / Contribution to adaptive architectures: a study of energy efficiency for data-parallel applications

Zhang, Xun 15 September 2009
This thesis falls within the field of reconfigurable architecture design. More precisely, it concerns adaptive hardware architectures, whose hardware characteristics can be modified while an application is running. We present a self-configuration methodology for a dynamically reconfigurable architecture, together with an architecture that illustrates the use of the method. The goal of the method is to reduce energy consumption while guaranteeing that the application's constraints are met at all times. The proposed methodology targets coarse-grained reconfigurable architectures, in which a hardware functional unit corresponds to a function at a high level of abstraction (IDWT, etc.), even though the architecture itself is implemented on a fine-grained reconfigurable fabric (FPGA). The need for adaptation addressed here covers two main cases: first, responding to dynamic variations of the computational load during processing, where an increase or decrease of the data rate creates a mismatch between the architecture and its environment; second, adapting to dynamic variations in the structure of the algorithm, since in some applications the processing to be performed changes with the incoming data. / This PhD project focuses on dynamic, adaptive, run-time parallelism and frequency-scaling techniques in coarse-grained reconfigurable hardware architectures. This architectural approach offers new features that increase flexibility and scalability for applications in an evolving environment at a reasonable energy cost. In this architecture, the parallelism granularity and the running frequency can be changed through partial dynamic reconfiguration. The adaptive method and architecture have been developed and tested on FPGA platforms; measurements and analysis based on a DWT application show that the energy efficiency can be adjusted dynamically with this approach. The main contribution is an auto-adaptive method in which partial dynamic reconfiguration adjusts the parallelism granularity and the running frequency of the application, evaluated on the same application. Results from implementations of a key image-processing application are presented, and the behaviour of the architecture on this application is analysed.
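The decision logic at the heart of such an adaptation loop can be sketched in a few lines. The Python sketch below only illustrates the principle described in the abstract: among a set of hypothetical design points (number of coarse-grained units and clock frequency), it chooses the lowest-power configuration that still meets the current input data rate, which is what a partial-reconfiguration controller would then instantiate. All power and throughput constants are assumed values, not measurements from the thesis.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DesignPoint:
    parallel_units: int      # parallelism granularity, e.g. number of IDWT cores instantiated
    freq_mhz: float          # clock frequency selected after partial reconfiguration

# Hypothetical design points and power model; real values would come from characterising
# the FPGA implementation experimentally.
DESIGN_POINTS = [DesignPoint(u, f) for u in (1, 2, 4) for f in (50.0, 100.0, 150.0)]
P_BASE_MW = 120.0            # static power of the fixed part of the design (assumed)
UNIT_STATIC_MW = 15.0        # static power added per instantiated unit (assumed)
DYN_MW_PER_MHZ = 0.6         # dynamic power per unit per MHz (assumed)
SAMPLES_PER_CYCLE = 1.0      # samples each unit processes per clock cycle (assumed)

def throughput_msps(c: DesignPoint) -> float:
    return c.parallel_units * c.freq_mhz * SAMPLES_PER_CYCLE

def power_mw(c: DesignPoint) -> float:
    return P_BASE_MW + c.parallel_units * (UNIT_STATIC_MW + DYN_MW_PER_MHZ * c.freq_mhz)

def choose_config(required_msps: float) -> DesignPoint:
    """Lowest-power design point whose throughput meets the current input data rate."""
    feasible = [c for c in DESIGN_POINTS if throughput_msps(c) >= required_msps]
    if not feasible:
        raise ValueError("no design point meets the requested throughput")
    return min(feasible, key=power_mw)

# A change in the incoming data rate triggers a new choice, i.e. a partial reconfiguration.
print(choose_config(40.0))    # light load -> few units, low clock
print(choose_config(350.0))   # heavy load -> more units and/or a higher clock
```

Since the input data rate bounds the useful work, minimising power among the feasible design points also minimises energy per processed sample, which is one way to read the energy-efficiency adjustment described above.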
5

A DWT Based Perceptual Video Coding Framework - Concepts, Issues and Techniques

Mei, Liming, james.mei@ieee.org January 2009
The work in this thesis explores DWT-based video coding through a novel DWT (Discrete Wavelet Transform) / MC (Motion Compensation) / DPCM (Differential Pulse Code Modulation) video coding framework, which adopts EBCOT as the coding engine for both the intra-frame and the inter-frame coder. An adaptive switching mechanism between frame and field coding modes is investigated for this framework. The Low-Band-Shift (LBS) is employed for motion compensation in the DWT domain, and LBS-based MC is shown to provide a consistent improvement in the Peak Signal-to-Noise Ratio (PSNR) of the coded video over simple Wavelet-Tree (WT) based MC. Adaptive Arithmetic Coding (AAC) is adopted to code the motion information, and the context set of the Adaptive Binary Arithmetic Coding (ABAC) for the inter-frame data is redesigned based on statistical analysis. To further improve the perceived picture quality, a Perceptual Distortion Measure (PDM) based on a human vision model is used in the EBCOT of the intra-frame coder, and a visibility assessment of the quantization error of the various DWT sub-bands is performed through subjective tests. In summary, these findings address the issues arising from the proposed perceptual video coding framework: a working DWT/MC/DPCM video coding framework with superior coding efficiency on sequences with translational or head-and-shoulder motion; an adaptive switching mechanism between frame and field coding modes; an effective LBS-based MC scheme in the DWT domain; a methodology for context design for entropy coding of the inter-frame data; a PDM that replaces the MSE inside the EBCOT coding engine of the intra-frame coder and improves the perceived quality of intra-frames; and a visibility assessment of the quantization errors in the DWT domain.
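The reason motion compensation in the wavelet domain needs something like the Low-Band-Shift is that the DWT is shift-variant. The short sketch below (using PyWavelets) demonstrates this in 1-D and then builds a toy LBS bank holding the low band of every integer shift of a reference signal; the thesis applies the same idea in two dimensions to reference frames, so treat this purely as an illustration of the principle, with `db2`, the signal length and the periodization mode chosen arbitrarily.

```python
import numpy as np
import pywt

# The DWT is shift-variant: the coefficients of a shifted signal are not a shifted copy
# of the original coefficients, so plain wavelet-domain block matching breaks down.
rng = np.random.default_rng(1)
x = rng.normal(size=64)
cA, _ = pywt.dwt(x, "db2", mode="periodization")
cA_shifted, _ = pywt.dwt(np.roll(x, 1), "db2", mode="periodization")
print(np.allclose(np.roll(cA, 1), cA_shifted))   # False: a 1-sample shift scrambles the sub-band

# LBS idea reduced to 1-D: precompute the low band of every integer shift of the
# reference, so a wavelet-domain predictor exists for any candidate displacement.
reference = rng.normal(size=64)
lbs_bank = {s: pywt.dwt(np.roll(reference, s), "db2", mode="periodization")[0]
            for s in range(len(reference))}

def predicted_low_band(displacement: int) -> np.ndarray:
    """Return the low-band predictor for a given integer motion displacement."""
    return lbs_bank[displacement % len(reference)]
```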
6

Dimensionality Reduction of Hyperspectral Signatures for Optimized Detection of Invasive Species

Mathur, Abhinav 13 December 2002 (has links)
The aim of this thesis is to investigate the use of hyperspectral reflectance signals for discriminating cogongrass (Imperata cylindrica) from other, subtly different vegetation species. Receiver operating characteristic (ROC) curves are used to determine which spectral bands should be considered as candidate features. Multivariate statistical analysis is then applied to the candidate features to determine the optimum subset of spectral bands, and linear discriminant analysis (LDA) is used to compute the optimum linear combination of the selected subset to serve as a feature for classification. For comparison, the same ROC analysis, multivariate statistical analysis and LDA are used to determine the most advantageous discrete wavelet coefficients for classification. The overall system was applied to hyperspectral signatures collected with a handheld spectroradiometer (ASD) and to simulated satellite signatures (Hyperion). Leave-one-out testing of a nearest-mean classifier on the ASD data shows that cogongrass can be detected amongst various other grasses with an accuracy as high as 87.86% using just the pure spectral bands, and 92.77% using the Haar wavelet decomposition coefficients. Similarly, the Hyperion signatures resulted in classification accuracies of 92.20% using the pure spectral bands and 96.82% using the Haar wavelet decomposition coefficients. These results show that hyperspectral reflectance signals can be used to reliably distinguish cogongrass from subtly different vegetation.
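A compact way to see how these pieces fit together is the scikit-learn sketch below: per-feature ROC screening, an LDA projection and a nearest-mean classifier scored with leave-one-out cross-validation, optionally on Haar wavelet coefficients of each signature. The arrays `spectra` and `labels`, the number of retained features and the decomposition level are all assumptions; the thesis's actual feature-selection and validation details (for instance, whether screening happens inside the leave-one-out loop) may differ.

```python
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import NearestCentroid
from sklearn.pipeline import make_pipeline

def cogongrass_accuracy(spectra, labels, n_features=20, use_wavelet=False):
    """Band screening + LDA + nearest-mean classifier, scored leave-one-out.
    spectra: (n_samples, n_wavelengths) reflectance array; labels: 1 = cogongrass, 0 = other.
    For simplicity the ROC screening is done once, outside the leave-one-out loop."""
    X = np.asarray(spectra, dtype=float)
    y = np.asarray(labels)
    if use_wavelet:
        # Replace raw bands by Haar decomposition coefficients of each signature.
        X = np.array([np.concatenate(pywt.wavedec(s, "haar", level=4)) for s in X])

    # ROC screening: keep the features whose AUC is farthest from 0.5 (most separating).
    aucs = np.array([roc_auc_score(y, X[:, j]) for j in range(X.shape[1])])
    selected = np.argsort(np.abs(aucs - 0.5))[-n_features:]

    # Optimum linear combination via LDA, then a nearest-mean (nearest-centroid) decision.
    clf = make_pipeline(LinearDiscriminantAnalysis(n_components=1), NearestCentroid())
    return cross_val_score(clf, X[:, selected], y, cv=LeaveOneOut()).mean()
```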
7

Hybrid DWT-DCT algorithm for image and video compression applications

Shrestha, Suchitra 23 February 2011
Digital image and video in their raw form require an enormous amount of storage capacity. Considering the important role played by digital imaging and video, it is necessary to develop a system that produces a high degree of compression while preserving critical image/video information. Various transformation techniques are used for data compression; the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) are the most commonly used. The DCT has a high energy-compaction property and requires fewer computational resources, whereas the DWT is a multiresolution transformation. In this work, we propose a hybrid DWT-DCT algorithm for image compression and reconstruction that takes advantage of both transforms: the algorithm performs the DCT on the DWT coefficients. Simulations have been conducted on several natural, benchmark, medical and endoscopic images, and several QCIF, high-definition and endoscopic videos have also been used to demonstrate the advantage of the proposed scheme. The simulation results show that the proposed hybrid DWT-DCT algorithm performs much better than the standalone JPEG-based DCT, DWT and WHT algorithms in terms of peak signal-to-noise ratio (PSNR), as well as visual perception at higher compression ratios. The new scheme significantly reduces false contouring and blocking artifacts. The rate-distortion analysis shows that, for a fixed level of distortion, the number of bits required to transmit the hybrid coefficients is lower than for the other schemes. Furthermore, the proposed algorithm is compared with some existing hybrid algorithms; the comparison shows that the proposed hybrid algorithm achieves better performance and reconstruction quality. The proposed scheme is intended to be used as the image/video compression engine in imaging and video applications.
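The core idea, running a DCT over DWT coefficients and discarding small values, can be prototyped in a few lines with PyWavelets and SciPy. The sketch below is a toy stand-in, not the thesis's coder: it applies a single global DCT to the assembled wavelet coefficients and hard-thresholds them instead of using block-wise transforms and quantisation, and the `keep` fraction, wavelet and level are arbitrary assumptions.

```python
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def hybrid_dwt_dct(image, keep=0.05, wavelet="haar", level=2):
    """Toy hybrid coder: 2-D DWT, then a DCT over the assembled coefficients, keep only the
    largest `keep` fraction of DCT values, invert both transforms and report the PSNR."""
    img = np.asarray(image, dtype=float)
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)          # pack sub-bands into one array

    d = dctn(arr, norm="ortho")
    thresh = np.quantile(np.abs(d), 1.0 - keep)          # crude "compression": hard threshold
    d[np.abs(d) < thresh] = 0.0

    rec_arr = idctn(d, norm="ortho")
    rec_coeffs = pywt.array_to_coeffs(rec_arr, slices, output_format="wavedec2")
    rec = pywt.waverec2(rec_coeffs, wavelet)[:img.shape[0], :img.shape[1]]

    mse = np.mean((img - rec) ** 2)
    psnr = 10.0 * np.log10(255.0 ** 2 / mse) if mse > 0 else np.inf
    return rec, psnr
```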
8

Projeto de uma arquitetura dedicada à compressão de imagens no padrão JPEG2000 / Design of a dedicated architecture for image compression in the JPEG2000 standard

Silva, Sandro Vilela da January 2005
The increasing demand for higher data transmission rates and higher data storage capacity calls for techniques that increase the compression rate of images while preserving image quality. The JPEG2000 standard proposes the use of the discrete wavelet transform and arithmetic coding to reach high compression rates while keeping reasonable quality in the resulting compressed image. The standard allows lossy as well as lossless compression, depending on the type of wavelet transform used. This work addresses the hardware implementation of the internal blocks that make up a lossy image compressor following the JPEG2000 standard. The main component of this image compressor is the two-dimensional irreversible discrete wavelet transform, implemented with a lifting scheme using the Daubechies 9/7 coefficients reported in the literature. To provide high compression rates for the irreversible transform, these real-valued coefficients – originally proposed in floating-point representation – are implemented here as rounded fixed-point values, introducing errors that are estimated and controlled. Several hardware architectures for the two-dimensional irreversible discrete wavelet transform were implemented to evaluate the trade-off between description style, area consumption and propagation delay. The architecture with the best trade-off requires 2,090 logic cells of an FPGA device, operates at up to 78.72 MHz and provides a processing rate of 28.2 million samples per second, with a mean squared error of 0.41% per transform level. The architecture implemented for the entropy-encoder block was synthesized from a behavioural description, yielding hardware able to process up to 843 thousand input coefficients per second. The results indicate that a lossy JPEG2000 image compressor built from the blocks implemented in this dissertation and operating at the maximum defined clock frequency can encode on average 1.8 million coefficients per second, that is, up to 27 frames of 256×256 pixels per second. The limiting factor is the entropy encoder, whose more complex algorithm requires further work to raise its coding rate through greater hardware parallelism.
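The effect of rounding the 9/7 coefficients to fixed point can be explored in software before committing to hardware. The sketch below is not the dissertation's RTL or its lifting datapath: it approximates the irreversible 9/7 transform with PyWavelets' `bior4.4` filter bank (the Daubechies 9/7 pair), builds a second wavelet whose taps are rounded to a chosen number of fractional bits, and compares the two round-trips on a random test image to get a rough error figure; the image, decomposition level and bit widths are all assumptions.

```python
import numpy as np
import pywt

def fixed_point(taps, frac_bits):
    """Round filter taps to a signed fixed-point grid with `frac_bits` fractional bits."""
    scale = 2 ** frac_bits
    return [round(t * scale) / scale for t in taps]

def rounding_error_percent(image, frac_bits, level=1):
    """Compare a 9/7 (bior4.4) round-trip using exact taps against one whose taps are
    quantised to fixed point; return the resulting mean squared error as a percentage."""
    exact = pywt.Wavelet("bior4.4")                     # CDF 9/7 filter bank
    quant = pywt.Wavelet("bior4.4_fixed",
                         filter_bank=[fixed_point(f, frac_bits) for f in exact.filter_bank])

    ref = pywt.waverec2(pywt.wavedec2(image, exact, level=level), exact)
    fix = pywt.waverec2(pywt.wavedec2(image, quant, level=level), quant)
    err = ref[:image.shape[0], :image.shape[1]] - fix[:image.shape[0], :image.shape[1]]
    return 100.0 * np.mean(err ** 2) / np.mean(np.asarray(image, dtype=float) ** 2)

img = np.random.default_rng(0).uniform(0.0, 255.0, (256, 256))
for bits in (8, 10, 12, 16):
    print(f"{bits} fractional bits -> {rounding_error_percent(img, bits):.5f} % MSE")
```

Sweeping the number of fractional bits in this way gives a rough feel for how the coefficient word length trades accuracy against hardware cost, which is the kind of error estimation and control the dissertation performs for its fixed-point design.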
