271

Hardware Implementation of Daubechies Wavelet Transforms Using Folded AIQ Mapping

Islam, Md Ashraful 22 September 2010
The Discrete Wavelet Transform (DWT) is a popular tool in the field of image and video compression applications. Because of its multi-resolution representation capability, the DWT has been used effectively in applications such as transient signal analysis, computer vision, texture analysis, cell detection, and image compression. Daubechies wavelets are among the most popular transforms in the wavelet family, and Daubechies filters provide excellent spatial and spectral locality, properties which make them useful in image compression.

In this thesis, we present an efficient implementation of a shared hardware core to compute two 8-point Daubechies wavelet transforms. The architecture is based on a new two-level folded mapping technique, an improved version of Algebraic Integer Quantization (AIQ). The scheme is developed on a factorization and decomposition of the transform coefficients that exploits the symmetrical and wrapping structure of the matrices. The proposed architecture is parallel, pipelined, and multiplexed. Compared to existing designs, the proposed scheme significantly reduces hardware cost, critical-path delay, and power consumption while delivering a higher throughput rate.

Finally, we briefly present a new mapping scheme to compute, without error, the 8-tap Daubechies wavelet transform, the successor of the 6-tap transform in the Daubechies wavelet series. The multidimensional technique maps the irrational transform basis coefficients to integers, resulting in a considerable reduction in hardware and power consumption and a significant improvement in image reconstruction quality.
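As a point of software reference for the transform this core computes, here is a minimal sketch of a standard floating-point 6-tap Daubechies (db3) decomposition and reconstruction; it illustrates only the transform itself, not the folded AIQ integer mapping, and the toy signal and the use of the `pywt` package are assumptions for illustration.

```python
# Reference software computation of the 6-tap Daubechies (db3) DWT; the folded
# AIQ hardware mapping itself is not reproduced here.
import numpy as np
import pywt

x = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 64))   # toy 64-sample signal

cA, cD = pywt.dwt(x, 'db3')        # single-level analysis: approximation/detail
x_rec = pywt.idwt(cA, cD, 'db3')   # synthesis: perfect up to float rounding

# The residual (~1e-15) is the floating-point rounding error that an
# algebraic-integer (AIQ) formulation avoids until the final reconstruction.
print(np.max(np.abs(x - x_rec)))
```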
272

Nonlinear Time-Frequency Control Theory with Applications

Liu, Mengkun 1978- 14 March 2013 (has links)
Nonlinear control is an important subject drawing much attention. When a nonlinear system undergoes route-to-chaos, its response remains bounded in the time domain while becoming unstably broadband in the frequency domain. A control scheme applied in either the time or the frequency domain alone is therefore insufficient to control route-to-chaos, in which the response deteriorates in both domains simultaneously. It is necessary to facilitate nonlinear control in both the time and frequency domains without obscuring or misinterpreting the true dynamics. The objective of the dissertation is to formulate a novel nonlinear control theory that addresses the fundamental characteristics inherent in all nonlinear systems undergoing route-to-chaos, one that requires no linearization or closed-form solution, so that the genuine underlying features of the system being considered are preserved. The theory developed herein is able to identify the dynamic state of the system in real time and restrain the time-varying spectrum from becoming broadband. Applications of the theory are demonstrated using several engineering examples, including the control of a non-stationary Duffing oscillator, a 1-DOF time-delayed milling model, a 2-DOF micro-milling system, unsynchronized chaotic circuits, and a friction-excited vibrating disk. Not subject to the mathematical constraints and assumptions upon which common nonlinear control theories are based and derived, the novel theory has its philosophical basis in simultaneous time-frequency control, on-line system identification, and feedforward adaptive control. It adopts multi-rate control, hence enabling control over non-stationary, nonlinear responses of increasing bandwidth, a physical condition that oftentimes defeats contemporary control theories. The applicability of the theory to complex multi-input-multi-output (MIMO) systems without resorting to mathematical manipulation and extensive computation is demonstrated through the multi-variable control of a micro-milling system. The research has broad impact on the control of a wide range of nonlinear and chaotic systems. The implications of the nonlinear time-frequency control theory in cutting, micro-machining, communication security, and the mitigation of friction-induced vibrations are both significant and immediate.
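As context for the first example above, a minimal simulation of the (uncontrolled) forced Duffing oscillator, x'' + d·x' + a·x + b·x³ = F·cos(w·t), is sketched below; sweeping the forcing amplitude is the classic way such a system is driven along a route-to-chaos. All parameter values are illustrative assumptions, and the dissertation's time-frequency controller is not reproduced.

```python
# Uncontrolled Duffing oscillator test bed: x'' + d*x' + a*x + b*x^3 = F*cos(w*t).
# Parameter values below are illustrative assumptions only.
import numpy as np
from scipy.integrate import solve_ivp

d, a, b, w = 0.3, -1.0, 1.0, 1.2     # damping, stiffnesses, drive frequency

def duffing(t, y, F):
    x, v = y
    return [v, -d * v - a * x - b * x**3 + F * np.cos(w * t)]

# Increasing the forcing amplitude F pushes the response toward chaos, the
# regime the dissertation's controller is designed to restrain.
for F in (0.20, 0.28, 0.50):
    sol = solve_ivp(duffing, (0, 200), [0.1, 0.0], args=(F,), max_step=0.01)
    print(F, sol.y[0, -3:])          # last few displacement samples
```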
273

A Real-Time, Low-Latency FPGA Implementation of the Two-Dimensional Discrete Wavelet Transform

Benderli, Oguz 01 August 2003 (has links) (PDF)
This thesis presents an architecture and an FPGA implementation of the two-dimensional discrete wavelet transform (DWT) for applications where row-based raw image data is streamed in at high bandwidths and local buffering of the entire image is not feasible. The architecture is especially suited to multi-spectral imager systems, such as on board an imaging satellite, but it can be used in any application where time-to-next-image constraints require real-time processing of multiple images. The latency introduced as the images stream through the DWT module, and the amount of locally stored image data, are functions of the image and tile size. For an n1 × n2 image processed using (n1/k1) × (n2/k2) sized tiles, the latency is equal to the time elapsed to accumulate a (1/k1) portion of one image. In addition, a (2/k1) portion of each image is buffered locally. The proposed hardware has been implemented on an FPGA and is part of a JPEG 2000 compression system designed as a payload for a low-earth-orbit (LEO) micro-satellite to be launched in September 2003. The architecture can achieve a throughput of up to 160 Mbit/s. The latency introduced is 0.105 s (6.25% of total transmission time) for tile sizes of 256 × 256. The local storage size required for the tiling operation is 2 MB. The internal storage requirement is 1536 pixels. The equivalent gate count for the design is 292,447.
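The quoted figures are mutually consistent, as the quick check below shows; the inferred image height n1 is a derived assumption, not a number stated in the abstract.

```python
# Consistency check of the latency figures quoted in the abstract.
latency = 0.105                      # seconds, quoted for 256 x 256 tiles
fraction = 0.0625                    # 6.25% of total transmission time

total_tx_time = latency / fraction   # time to stream one full image: 1.68 s
k1 = 1 / fraction                    # latency = (1/k1) of an image => k1 = 16

# 256-row tiles with k1 = 16 would imply n1 = 4096 image rows (an inference).
n1 = int(k1 * 256)
print(total_tx_time, int(k1), n1)    # 1.68 16 4096
```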
274

Perceptual Criteria on Image Compression

Moreno Escobar, Jesús Jaime 01 July 2011 (has links)
Nowadays, digital images are used in many areas of everyday life, but they tend to be big. This increasing amount of information leads us to the problem of image data storage. For example, it is common to represent a color pixel as a 24-bit number, where the red, green, and blue channels employ 8 bits each. In consequence, this kind of color pixel can specify one of 2^24 ≈ 16.78 million colors. Therefore, an image at a resolution of 512 × 512 that allocates 24 bits per pixel occupies 786,432 bytes. That is why image compression is important. An important feature of image compression is that it can be lossy or lossless. A compressed image is acceptable provided the losses of image information are not perceived by the eye, which is possible if one assumes that a portion of this information is redundant. Lossless image compression is defined as mathematically decoding the same image that was encoded. Lossy image compression needs to identify two features inside the image: the redundancy and the irrelevancy of information. Thus, lossy compression modifies the image data in such a way that, when they are encoded and decoded, the recovered image is similar enough to the original one. How similar the recovered image is in comparison to the original is defined prior to the compression process and depends on the implementation to be performed. In lossy compression, current image compression schemes remove information considered irrelevant by using mathematical criteria. One problem of these schemes is that although the numerical quality of the compressed image is low, it shows a high visual image quality, e.g. it does not show a lot of visible artifacts, because the mathematical criteria used to remove information do not take into account whether the removed information is perceived by the Human Visual System.
Therefore, the aim of an image compression scheme designed to obtain images that do not show artifacts, even though their numerical quality may be low, is to eliminate the information that is not visible to the Human Visual System. Hence, this Ph.D. thesis proposes to exploit the visual redundancy existing in an image by reducing those features that are imperceptible to the Human Visual System. First, we define an image quality assessment metric that is highly correlated with psychophysical experiments performed by human observers. The proposed CwPSNR metric weights the well-known PSNR by using a perceptual low-level model of the Human Visual System, the Chromatic Induction Wavelet Model (CIWaM). Second, we propose an image compression algorithm (called Hi-SET) which exploits the high correlation and self-similarity of pixels in a given area or neighborhood by means of a fractal function. Hi-SET possesses the main features of modern image compressors; that is, it is an embedded coder, which allows progressive transmission. Third, we propose a perceptual quantizer (½SQ), which is a modification of the uniform scalar quantizer. The ½SQ is applied to a pixel set in a certain wavelet sub-band, that is, a global quantization. Beyond this, the proposed modification allows a local pixel-by-pixel forward and inverse quantization, introducing into this process a perceptual distortion that depends on the surrounding spatial information of the pixel. Combining the ½SQ method with the Hi-SET image compressor, we define a perceptual image compressor, called ©SET. Finally, a coding method for Region of Interest areas is presented, ½GBbBShift, which perceptually weights pixels inside these areas and maintains only the more important perceivable features in the rest of the image. Results presented in this report show that CwPSNR is the best-ranked image quality method when applied to the most common image compression distortions, such as JPEG and JPEG2000. CwPSNR shows the best correlation with the judgement of human observers, based on the results of psychophysical experiments obtained for relevant image quality databases such as TID2008, LIVE, CSIQ and IVC. Furthermore, the Hi-SET coder obtains better results, both in compression ratio and in perceptual image quality, than the JPEG2000 coder and other coders that use a Hilbert fractal for image compression. Hence, when the proposed perceptual quantization is introduced into the Hi-SET coder, our compressor improves its numerical and perceptual efficiency. When the ½GBbBShift method applied to Hi-SET is compared against the MaxShift method applied to the JPEG2000 standard and to Hi-SET, the images coded by our ROI method obtain the best results when the overall image quality is estimated. Both the proposed perceptual quantization and the ½GBbBShift method are generalized algorithms that can be applied to other wavelet-based image compression algorithms such as JPEG2000, SPIHT or SPECK.
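As a toy illustration of the weighting idea behind CwPSNR, the sketch below contrasts plain PSNR with a perceptually weighted variant; the uniform weight map is a placeholder assumption standing in for the CIWaM-derived weights, which are not reproduced here.

```python
# Plain PSNR vs. a perceptually weighted PSNR; the weight map w would come
# from a visual model (CIWaM in the thesis), here replaced by a placeholder.
import numpy as np

def psnr(ref, img, peak=255.0):
    mse = np.mean((ref - img) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def weighted_psnr(ref, img, w, peak=255.0):
    # Errors the visual model deems imperceptible receive low weight.
    wmse = np.sum(w * (ref - img) ** 2) / np.sum(w)
    return 10 * np.log10(peak ** 2 / wmse)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64)).astype(float)
img = ref + rng.normal(0, 4, ref.shape)
w = np.ones_like(ref)                 # placeholder for a CIWaM weight map
print(psnr(ref, img), weighted_psnr(ref, img, w))   # identical for uniform w
```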
275

Speckle Noise Reduction via Homomorphic Elliptical Threshold Rotations in the Complex Wavelet Domain

Ng, Edmund January 2005 (has links)
Many clinicians regard speckle noise as an undesirable artifact in ultrasound images, masking the underlying pathology within a patient. Speckle noise is a random interference pattern formed by coherent radiation in a medium containing many sub-resolution scatterers. Speckle has a negative impact on ultrasound images, as the texture does not reflect the local echogenicity of the underlying scatterers. Studies have shown that the presence of speckle noise can reduce a physician's ability to detect lesions by a factor of eight. Without speckle, small high-contrast targets, low-contrast objects, and image texture can be deduced quite readily.

Speckle filtering of medical ultrasound images represents a critical pre-processing step, providing clinicians with enhanced diagnostic ability. Efficient speckle noise removal algorithms may also find applications in real-time surgical guidance assemblies. However, it is vital that regions of interest are not compromised during speckle removal. This research pertains to the reduction of speckle noise in ultrasound images while attempting to retain clinical regions of interest.

Recently, the advance of wavelet theory has led to many applications in noise reduction and compression. Upon investigation of these two divergent fields, it was found that speckle noise tends to rotate an image's homomorphic complex-wavelet coefficients. This work proposes a new speckle reduction filter involving a counter-rotation of these complex-wavelet coefficients to mitigate the presence of speckle noise. Simulations suggest the proposed denoising technique offers superior visual quality, though its signal-to-mean-square-error ratio (S/MSE) is numerically comparable to adaptive Frost and Kuan filtering.

This research improves the quality of ultrasound medical images, leading to improved diagnosis for one of the most popular and cost-effective imaging modalities used in clinical medicine.
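A conceptual sketch of the core counter-rotation step is given below on a toy complex subband; the rotation-angle estimate is a stand-in, since the thesis derives it from its homomorphic elliptical-threshold model.

```python
# Counter-rotating complex-wavelet coefficients whose phases speckle has
# rotated; the angle estimate here is a stand-in for the thesis's model.
import numpy as np

def counter_rotate(coeffs, theta):
    """Undo a speckle-induced phase rotation of complex wavelet coefficients."""
    return coeffs * np.exp(-1j * theta)

rng = np.random.default_rng(1)
clean = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
theta = 0.4                              # assumed speckle rotation angle
noisy = clean * np.exp(1j * theta)       # toy 'speckled' subband

denoised = counter_rotate(noisy, theta)
print(np.allclose(denoised, clean))      # True: rotation undone in this toy case
```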
276

Spatial and Temporal Image Prediction with Magnitude and Phase Representations

January 2011 (has links)
In this dissertation, I develop the theory and techniques for spatial and temporal image prediction with the magnitude and phase representation of the Complex Wavelet Transform (CWT) or the over-complete DCT to solve the problems of image inpainting and motion compensated inter-picture prediction. First, I develop the theory and algorithms of image reconstruction from the analytic magnitude or phase of the CWT. I prove the conditions under which a signal is uniquely specified by its analytic magnitude or phase, propose iterative algorithms for the reconstruction of a signal from its analytic CWT magnitude or phase, and analyze the convergence of the proposed algorithms. Image reconstruction from the magnitude and pseudo-phase of the over-complete DCT is also discussed and demonstrated. Second, I propose simple geometrical models of the CWT magnitude and phase to describe edges and structured textures and develop a spatial image prediction (inpainting) algorithm based on those models and the iterative image reconstruction mentioned above. Piecewise smooth signals, structured textures and their mixtures can be predicted successfully with the proposed algorithm. Simulation results show that the proposed algorithm achieves appealing visual quality with low computational complexity. Finally, I propose a novel temporal (inter-picture) image predictor for hybrid video coding. The proposed predictor enables successful predictive coding during fades, blended scenes, temporally decorrelated noise, and many other temporal evolutions that are beyond the capability of the traditional motion compensated prediction methods. The proposed predictor estimates the transform magnitude and phase of the desired motion compensated prediction by exploiting the temporal and spatial correlations of the transform coefficients. For the case of implementation in standard hybrid video coders, the over-complete DCT is chosen over the CWT. Better coding performance is achieved with the state-of-the-art H.264/AVC video encoder equipped with the proposed predictor. The proposed predictor is also successfully applied to image registration.
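A toy analogue of the iterative magnitude-only reconstruction is sketched below using the DFT for brevity; this substitution is an assumption of the sketch, since the dissertation works with the analytic CWT magnitude/phase and derives its own uniqueness conditions and convergence analysis.

```python
# Alternating-projection reconstruction of a signal from transform magnitude,
# shown with the DFT in place of the CWT used in the dissertation.
import numpy as np

rng = np.random.default_rng(2)
x_true = rng.normal(size=32)
mag = np.abs(np.fft.fft(x_true))        # 'measured' transform magnitude

x = rng.normal(size=32)                 # arbitrary real initial guess
for _ in range(100):
    X = np.fft.fft(x)
    X = mag * np.exp(1j * np.angle(X))  # project onto the magnitude constraint
    x = np.real(np.fft.ifft(X))         # project back onto real-valued signals

# In this real-valued toy the magnitude constraint is met almost immediately;
# the CWT case requires the conditions and iterations derived in the text.
print(np.max(np.abs(np.abs(np.fft.fft(x)) - mag)))   # ~0: magnitude matched
```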
277

Application of Noise Invalidation Denoising in MRI

Elahi, Pegah January 2012 (has links)
Magnetic Resonance Imaging (MRI) is a common medical imaging tool that has been used in the clinical industry for diagnostic and research purposes. These images are subject to noise introduced while capturing the data, which can affect image quality and diagnostics. Therefore, improving the quality of the generated images from both the resolution and the signal-to-noise ratio (SNR) perspective is critical. Wavelet-based denoising is one of the common tools to remove noise from MRI images. The noise is eliminated from the detail coefficients of the signal in the wavelet domain. This can be done by applying thresholding methods; the main task is to find an optimal threshold and keep all the coefficients larger than this threshold as the noiseless ones. Noise Invalidation Denoising (NIDe) is a technique in which the optimal threshold is found by comparing the noisy signal to a noise signature (a function of the noise statistics). The original NIDe approach was developed for one-dimensional signals with additive Gaussian noise. In this work, the existing NIDe approach has been generalized for application to MRI images with different noise distributions. The developed algorithm was tested on simulated data from the BrainWeb database and compared with the well-known Non-Local Means (NLM) filtering method for MRI. The results indicated better preservation of detailed structure for the NIDe approach on the magnitude data, while the signal-to-noise ratio is comparable. The algorithm shows an important advantage, namely lower computational complexity than the NLM method. On the other hand, when the unbiased NLM technique is combined with the proposed technique, it can yield the same structural similarity while the signal-to-noise ratio is improved.
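The general shape of such wavelet-domain thresholding is sketched below; the classical universal threshold is used as a stand-in assumption, whereas NIDe instead selects the threshold by comparing the coefficients against a noise signature built from the noise statistics.

```python
# Wavelet soft-thresholding of a noisy signal; the universal threshold below is
# a stand-in for NIDe's noise-signature-based threshold selection.
import numpy as np
import pywt

rng = np.random.default_rng(3)
clean = np.repeat([0.0, 4.0, -2.0, 3.0], 64)        # piecewise-constant signal
noisy = clean + rng.normal(0, 0.5, clean.size)

coeffs = pywt.wavedec(noisy, 'db3', level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # MAD noise estimate
thr = sigma * np.sqrt(2 * np.log(noisy.size))       # universal threshold

den = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
rec = pywt.waverec(den, 'db3')
print(np.mean((rec[:clean.size] - clean) ** 2))     # lower MSE than the input
```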
279

Complex-Wavelet Structural Similarity Based Image Classification

Gao, Yang January 2012 (has links)
The complex wavelet structural similarity (CW-SSIM) index has been recognized as a novel image similarity measure of broad potential application due to its robustness to small geometric distortions such as translation, scaling and rotation of images. Nevertheless, how to make the best use of it in image classification problems has not been deeply investigated. In this study, we introduce a series of novel image classification algorithms based on CW-SSIM and use handwritten digit and face image recognition as examples for demonstration, including a CW-SSIM-based nearest neighbor method, a CW-SSIM-based k-means method, a CW-SSIM-based support vector machine (SVM) method, and a CW-SSIM-based SVM using affinity propagation. Among the proposed approaches, the best compromise between accuracy and complexity is obtained by the CW-SSIM support vector machine algorithm, which combines an unsupervised clustering method to divide the training images into clusters with representative images and a supervised learning method based on support vector machines to maximize the classification accuracy. Our experiments show that such a conceptually simple image classification method, which does not involve any registration, intensity normalization or sophisticated feature extraction processes, and does not rely on any modeling of the image patterns or distortion processes, achieves competitive performance at reduced computational cost.
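A minimal sketch of the nearest-neighbor variant is given below; the local CW-SSIM statistic follows the commonly cited form of the index, and the full pipeline (complex wavelet decomposition of real images, window sweeping, the SVM variants) is not reproduced, so the toy complex 'patches' are assumptions.

```python
# CW-SSIM-based nearest-neighbor classification on toy complex-coefficient
# patches; the statistic follows the commonly cited form of the index.
import numpy as np

def cw_ssim(cx, cy, K=0.01):
    num = 2 * np.abs(np.sum(cx * np.conj(cy))) + K
    den = np.sum(np.abs(cx) ** 2) + np.sum(np.abs(cy) ** 2) + K
    return num / den                     # robust to a common phase shift

def nearest_neighbor(query, train_coeffs, train_labels):
    sims = [cw_ssim(query, c) for c in train_coeffs]
    return train_labels[int(np.argmax(sims))]

rng = np.random.default_rng(4)
a = rng.normal(size=16) + 1j * rng.normal(size=16)   # class 'digit-0' exemplar
b = rng.normal(size=16) + 1j * rng.normal(size=16)   # class 'digit-1' exemplar

# A globally phase-shifted (e.g. slightly translated) copy of a still matches a.
print(nearest_neighbor(a * np.exp(1j * 0.1), [a, b], ['digit-0', 'digit-1']))
```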
