1. Wavelet Image Compressor - Minimage. Gu, Hao; Hong, Don; Barrett, Martin. 01 January 2003 (has links)
Nowadays, still images are used everywhere in the digital world. The shortage of storage capacity and transmission bandwidth makes efficient compression solutions essential. A revolutionary mathematical tool, the wavelet transform, has already shown its power in image processing. MinImage, the major topic of this paper, is an application that compresses still images with wavelets. MinImage compresses grayscale and true-color images, implementing the wavelet transform to code standard BMP image files into LET wavelet image files, a format defined by MinImage. The code is written in C++ on the Microsoft Windows NT platform. This paper illustrates the design and implementation details of MinImage according to the image compression stages. First, the preprocessor generates the wavelet transform blocks. Second, the basic wavelet decomposition is applied to transform the image data into wavelet coefficients. The discrete wavelet transforms are the kernel component of MinImage and are discussed in detail; different wavelet transforms can be plugged in to extend the functionality of MinImage. The third step is quantization: the standard scalar quantization algorithm and the optimized quantization algorithm, as well as dequantization, are described. The last part of MinImage is the entropy-coding scheme; the reordering of the coefficients based on the Peano curve and the different entropy coding methods are discussed. This paper also gives the specification of the wavelet compression parameters adjustable by the end user. The interface, parameter specification, and analysis of MinImage are given in the final appendix.
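The stages just described (decomposition, scalar quantization, dequantization, reconstruction) can be sketched in a few lines with PyWavelets. This is a minimal illustration rather than MinImage's C++ implementation: the bior4.4 wavelet, the quantization step, and the random test image are assumptions, and the LET file format and optimized quantizer are not reproduced.

```python
# Minimal sketch of the transform -> quantize -> dequantize -> reconstruct
# stages of a wavelet image codec. Assumptions (not MinImage's actual code):
# bior4.4 wavelet, uniform scalar quantizer, random stand-in image.
import numpy as np
import pywt

def compress_decompress(image, wavelet="bior4.4", step=8.0, levels=3):
    # Wavelet decomposition: one approximation + per-level detail subbands.
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
    # Uniform scalar quantization of every subband (the lossy step).
    quantized = [np.round(coeffs[0] / step)]
    quantized += [tuple(np.round(c / step) for c in detail)
                  for detail in coeffs[1:]]
    # Dequantization and inverse transform reconstruct the image.
    dequant = [quantized[0] * step]
    dequant += [tuple(c * step for c in detail) for detail in quantized[1:]]
    return pywt.waverec2(dequant, wavelet)

image = np.random.randint(0, 256, (256, 256)).astype(float)  # stand-in BMP
restored = compress_decompress(image)
print("RMS error:", np.sqrt(np.mean((image - restored) ** 2)))
```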
2. An HMM-based segmentation method for traffic monitoring movies. Kato, Jien; Watanabe, Toyohide; Joga, Sebastien; Rittscher, Jens; Blake, Andrew (加藤, ジェーン; 渡邉, 豊英). 09 1900 (has links)
No description available.
3. Model-Based Clustering for Gene Expression and Change Patterns. Jan, Yi-An. 29 July 2011 (has links)
It is important to study gene expression and change patterns over a time period, because biologically related gene groups are likely to share similar patterns. In this study, similar gene expression and change patterns are found via a model-based clustering method. Fourier and wavelet coefficients of the gene expression data are used as the clustering variables. A two-stage model-based method is proposed for stepwise clustering of expression and change patterns. A simulation study is performed to investigate the effectiveness of the proposed methodology, and yeast cell cycle data are analyzed.
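As a rough illustration of the clustering idea, the sketch below fits a Gaussian mixture to Haar wavelet coefficients of synthetic two-group expression profiles. The wavelet choice, the mixture model, and the data are assumptions; the thesis's specific two-stage procedure and its Fourier variant are not reproduced.

```python
# Sketch: cluster expression time series on their wavelet coefficients.
# Assumptions: Haar wavelet features, a plain Gaussian mixture, synthetic
# two-group profiles; the thesis's two-stage procedure is not reproduced.
import numpy as np
import pywt
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 32)
# Two phase-shifted groups standing in for cell-cycle expression patterns.
genes = np.vstack([np.sin(t) + 0.3 * rng.standard_normal((50, 32)),
                   np.sin(t + np.pi) + 0.3 * rng.standard_normal((50, 32))])

def wavelet_features(x, wavelet="haar", level=3):
    return np.concatenate(pywt.wavedec(x, wavelet, level=level))

features = np.array([wavelet_features(g) for g in genes])
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(features)
print("cluster sizes:", np.bincount(labels))   # close to 50/50
```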
4. Potlačování šumu v řeči založené na waveletové transformaci a rozeznávání znělosti segmentů / Speech denoising based on wavelet transform and voice recognition in segments. Chrápek, Tomáš. January 2008 (has links)
The wavelet transform is a modern signal processing tool that has earned great success mainly for its unique properties, such as the ability to capture very fast changes in the processed signal. The theoretical part of this work is an introduction to wavelet theory: wavelet types, the wavelet transform, and its application in signal denoising systems. The main problem in speech denoising is the degradation of the speech signal when denoising unvoiced parts, because unvoiced segments and noise have very similar characteristics. The solution is to treat voiced and unvoiced segments of the speech differently. The main goal of this diploma thesis was to create an application implementing speech denoising using the wavelet transform, with special attention paid to applying different treatment to voiced and unvoiced segments. The application is programmed as a graphical user interface (GUI) in the MATLAB environment, which allows users to test the presented procedures comfortably. This work presents the achieved results and discusses them against the general requirements posed on an application of this type. The most important conclusion of this diploma thesis is that a trade-off must be made between sufficient signal denoising and keeping the speech intelligible.
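The different treatment of voiced and unvoiced segments can be sketched as frame-wise wavelet thresholding gated by a voicing test. The zero-crossing-rate rule, the db4 wavelet, and the halved threshold for unvoiced frames are assumptions of this Python sketch; the thesis's MATLAB application is not reproduced.

```python
# Sketch of voicing-aware wavelet denoising: a full universal threshold on
# voiced frames, a gentler one on unvoiced frames so fricatives survive.
# Assumptions: zero-crossing-rate voicing test, db4 wavelet, halved
# unvoiced threshold; the thesis's MATLAB GUI and detector are not shown.
import numpy as np
import pywt

def is_voiced(seg, zcr_limit=0.15):
    # Low zero-crossing rate suggests a voiced (quasi-periodic) frame.
    return np.mean(np.abs(np.diff(np.sign(seg)))) / 2 < zcr_limit

def denoise_segment(seg, wavelet="db4", level=4):
    coeffs = pywt.wavedec(seg, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise level from cD1
    thr = sigma * np.sqrt(2 * np.log(len(seg)))         # universal threshold
    if not is_voiced(seg):
        thr *= 0.5                                      # spare unvoiced detail
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(seg)]

noisy = np.random.standard_normal(8000)   # stand-in for noisy speech at 8 kHz
frame = 256
clean = np.concatenate([denoise_segment(noisy[i:i + frame])
                        for i in range(0, len(noisy) - frame + 1, frame)])
```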
5. Digitální hudební efekt založený na waveletové transformaci jako plug-in modul / Digital musical effect as a plug-in module based on wavelet transform. Konczi, Róbert. January 2011 (has links)
This work deals with the theory of the wavelet transform and Mallat's algorithm. It also covers the programming method for creating VST plug-in modules and describes the development of a plug-in module which applies a music effect by modifying the wavelet transform coefficients.
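A minimal sketch of the underlying idea follows: decompose the signal via Mallat's algorithm, scale the wavelet detail coefficients, and reconstruct. The db8 wavelet and the per-band gains are assumptions; a real VST module would implement this in C++ inside the plug-in's processing callback.

```python
# Sketch of the effect idea: decompose with Mallat's algorithm, scale the
# detail bands, reconstruct. Assumptions: db8 wavelet and ad-hoc gains.
import numpy as np
import pywt

def wavelet_effect(x, gains, wavelet="db8"):
    # gains[i] scales the i-th detail band, coarsest band first.
    coeffs = pywt.wavedec(x, wavelet, level=len(gains))
    details = [g * c for g, c in zip(gains, coeffs[1:])]
    return pywt.waverec([coeffs[0]] + details, wavelet)[:len(x)]

fs = 44100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t) + 0.2 * np.sin(2 * np.pi * 4400 * t)
y = wavelet_effect(x, gains=[1.0, 1.0, 0.5, 0.5, 2.0])  # boost finest band
```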
6. Estudo de Fractalidade e Evolução Dinâmica de Sistemas Complexos / Study of Fractality and Dynamical Evolution of Complex Systems. Morais, Edemerson Solano Batista de. 28 December 2007 (has links)
In this work, some complex systems are studied using two distinct procedures. In the first part, we study the use of the wavelet transform in the analysis and characterization of (multi)fractal time series. We test the reliability of the Wavelet Transform Modulus Maxima (WTMM) method with respect to the multifractal formalism through the calculation of the singularity spectrum of time series whose fractality is well known a priori. Next, we use the WTMM method to study the fractality of lung crackle sounds, a biological time series. Since crackles are produced by the opening of pulmonary airways (bronchi, bronchioles, and alveoli) that were initially closed, we can obtain information on the airway-opening cascade of the whole lung. Because this phenomenon is associated with the architecture of the pulmonary tree, which displays fractal geometry, the analysis and fractal characterization of this sound may provide important parameters for comparing healthy lungs with lungs affected by disorders that alter the geometry of the airway tree, such as obstructive and parenchymal degenerative diseases, which occur, for example, in pulmonary emphysema.

In the second part, we study a site percolation model on square lattices, in which the percolating cluster grows governed by a control rule corresponding to an automatic search method. In this percolation model, which exhibits characteristics of self-organized criticality, the automatic search does not use Leath's algorithm; it uses the control rule p(t+1) = p(t) + k(Rc - Rt), where p is the percolation probability, k is a kinetic parameter with 0 < k < 1, and R is the fraction of percolating finite L x L square lattices. This rule provides a time series corresponding to the dynamical evolution of the system, in particular of the percolation probability p, and we analyze the scaling of the signal obtained in this way. The model allows the automatic search method itself to be studied, evaluating the dynamics of its parameters as the system approaches the critical point. It shows that the scalings of τ, the time elapsed until the system reaches the critical point, and of tcor, the time required for the system to lose its correlations, are both inversely proportional to k, the kinetic parameter of the control rule. We further verify that the system exhibits two distinct time scales after τ: one in which it shows 1/f-type noise, indicating strong correlation, and another in which it shows white noise, indicating that the correlation has been lost. Over large time intervals the dynamics of the system is ergodic.
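The control rule lends itself to a compact simulation. The sketch below estimates the percolating fraction Rt from a batch of random L x L lattices at each step and then updates p; the batch estimator, the target Rc = 0.5, and all parameter values are assumptions, and the thesis's cluster-growth procedure is not reproduced. The probability p settles near the site-percolation threshold of the square lattice (about 0.593).

```python
# Sketch of the automatic-search rule p(t+1) = p(t) + k*(Rc - Rt) for site
# percolation on an L x L lattice. Assumptions: Rt estimated from a batch of
# random lattices per step, Rc = 0.5.
import numpy as np
from scipy.ndimage import label

def percolates(p, L, rng):
    occupied = rng.random((L, L)) < p
    labels, _ = label(occupied)              # 4-connected site clusters
    top, bottom = set(labels[0]) - {0}, set(labels[-1]) - {0}
    return bool(top & bottom)                # a cluster spans top to bottom

def automatic_search(k=0.1, L=64, steps=200, batch=50, Rc=0.5, seed=1):
    rng, p, series = np.random.default_rng(seed), 0.2, []
    for _ in range(steps):
        Rt = np.mean([percolates(p, L, rng) for _ in range(batch)])
        p += k * (Rc - Rt)                   # the control rule
        series.append(p)
    return np.array(series)

series = automatic_search()
print("p settles near the critical point:", series[-5:])
```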
7. Compressed Domain Processing of MPEG Audio. Anantharaman, B. 03 1900 (has links)
MPEG audio compression techniques significantly reduce the storage and transmission requirements for high quality digital audio. However, compression complicates the processing of audio in many applications: if a compressed audio signal is to be processed, a direct method would be to decode the compressed signal, process the decoded signal, and re-encode it, which is computationally expensive due to the complexity of the MPEG filter bank. This thesis deals with processing of MPEG compressed audio. The main contributions of this thesis are:
a) Extracting wavelet coefficients in the MPEG compressed domain.
b) Wavelet based pitch extraction in MPEG compressed domain.
c) Time Scale Modifications of MPEG audio.
d) Watermarking of MPEG audio.
The research contributions start with a technique for calculating several levels of wavelet coefficients from the output of the MPEG analysis filter bank. The technique exploits the Toeplitz structure that arises when the MPEG and wavelet filter banks are represented in matrix form. The computational complexity of extracting several levels of wavelet coefficients after decoding the compressed signal is compared with that of extracting them directly from the output of the MPEG analysis filter bank; the proposed technique is found to be computationally efficient for extracting higher levels of wavelet coefficients.
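The matrix view behind this technique can be illustrated directly: if S is a synthesis matrix mapping subband samples to PCM and W the wavelet analysis matrix, then T = WS maps subband samples straight to wavelet coefficients and can be precomputed. In the sketch below a toy orthogonal matrix stands in for the 32-band MPEG synthesis filter bank, and the Toeplitz block structure exploited in the thesis is not reproduced.

```python
# Illustration of the matrix view: T = W @ S maps subband samples directly
# to wavelet coefficients. Assumptions: toy orthogonal S, Haar wavelet.
import numpy as np
import pywt

N = 64
S = np.linalg.qr(np.random.default_rng(0).standard_normal((N, N)))[0]

def analysis_matrix(n, wavelet="haar", level=3):
    # Column i is the wavelet transform of the i-th unit impulse (linearity).
    cols = []
    for i in range(n):
        e = np.zeros(n)
        e[i] = 1.0
        cols.append(np.concatenate(pywt.wavedec(e, wavelet, level=level)))
    return np.array(cols).T

W = analysis_matrix(N)
T = W @ S                        # precomputed offline, applied per frame
subband = np.random.default_rng(1).standard_normal(N)
direct = T @ subband             # compressed-domain wavelet coefficients
via_pcm = W @ (S @ subband)      # decode-then-transform reference
print(np.allclose(direct, via_pcm))   # True by associativity; T saves work
```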
Extracting pitch in the compressed domain becomes essential when large multimedia databases need to be indexed; for example, one may want to listen to a particular speaker, or to male/female audio segments, in a multimedia document. For such applications, pitch is one of the most basic and important features. Pitch is essentially the time interval between two successive glottal closures. Glottal closures are accompanied by sharp transients in the speech signal, which in turn give rise to local maxima in the wavelet coefficients, so pitch can be calculated by finding the time interval between two successive maxima of the wavelet coefficients. It is shown that the computational complexity of extracting pitch in the compressed domain is less than 7% of that of uncompressed-domain processing. An algorithm for extracting pitch in the compressed domain is proposed, and its results for synthetic signals and for words uttered by male and female speakers are reported.
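The maxima-interval idea can be sketched on a synthetic glottal-pulse train; the db4 wavelet, the peak-picking parameters, and the signal itself are assumptions, and the pitch is recovered here from wavelet detail coefficients of PCM rather than from MPEG subband data.

```python
# Sketch: pitch as the interval between wavelet-coefficient maxima.
# Assumptions: synthetic glottal-pulse train at ~120 Hz, db4 wavelet.
import numpy as np
import pywt
from scipy.signal import find_peaks

fs, f0 = 8000, 120.0
n = np.arange(fs)
speech = ((n % round(fs / f0)) == 0).astype(float)    # glottal-pulse train
speech += 0.01 * np.random.default_rng(0).standard_normal(fs)

cA, cD = pywt.dwt(speech, "db4")                      # one-level DWT
peaks, _ = find_peaks(np.abs(cD), height=0.4 * np.abs(cD).max(), distance=20)
period = np.median(np.diff(peaks)) * 2 / fs           # undo factor-2 subsampling
print("estimated pitch: %.1f Hz" % (1 / period))      # close to 120 Hz
```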
In a number of important applications, one needs to modify an audio signal to render it more useful than the original. Typical applications include changing the time evolution of an audio signal (increasing or decreasing the rate of articulation of a speaker), or adapting a given audio sequence to a given video sequence. In this thesis, time scale modifications are obtained in the subband domain, such that when the modified subband signals are given to the MPEG synthesis filter bank, the desired time scale modification of the decoded signal is achieved. This is done using sinusoidal modeling [1]: each of the subband signals is modeled in terms of parameters such as amplitude, phase, and frequency, and is subsequently synthesised using these parameters with Ls = k La, where Ls is the length of the synthesis window, k is the time scale factor, and La is the length of the analysis window. As the PCM version of the time-scaled signal is not available, psychoacoustic-model-based bit allocation cannot be used; hence a new bit allocation is done using a subband coding algorithm. This method has been satisfactorily tested for time scale expansion and compression of speech and music signals.
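A toy version of sinusoidal-model time scaling with Ls = k La is sketched below: each analysis frame is reduced to one dominant partial (amplitude, frequency, phase) and resynthesised over a longer window. The single-partial model and the mono PCM tone are simplifying assumptions; the thesis applies the model per MPEG subband and then re-allocates bits.

```python
# Sketch of sinusoidal-model time scaling with Ls = k * La.
# Assumptions: one dominant partial per frame, toy mono signal.
import numpy as np

def time_scale(x, fs, k=1.5, La=512):
    Ls = int(k * La)
    out, phase = [], 0.0
    for start in range(0, len(x) - La + 1, La):
        frame = x[start:start + La] * np.hanning(La)
        spec = np.fft.rfft(frame)
        bin_ = np.argmax(np.abs(spec))                 # dominant partial
        amp = 2 * np.abs(spec[bin_]) / np.sum(np.hanning(La))
        freq = bin_ * fs / La
        t = np.arange(Ls) / fs
        out.append(amp * np.cos(2 * np.pi * freq * t + phase))
        phase += 2 * np.pi * freq * Ls / fs            # keep phase continuous
    return np.concatenate(out)

fs = 8000
x = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)       # one-second tone
y = time_scale(x, fs, k=1.5)                           # ~1.5 s, same pitch
```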
The recent growth of multimedia systems has increased the need for protecting digital media, and digital watermarking has been proposed as a method for protecting digital documents. The watermark needs to be added to the signal in such a way that it does not cause audible distortions. However, the idea behind lossy MPEG encoders is to remove, or render insignificant, those portions of the signal that do not affect human hearing; this renders the watermark insignificant, and hence proving ownership becomes difficult once an audio signal is compressed. Existing compressed-domain methods merely change the bits or the scale factors according to a key. Though simple, these methods are not robust to attacks, and they require the original signal to be available in the verification process. In this thesis we propose a watermarking method based on the spread spectrum technique that does not require the original signal during verification, and it is shown to be more robust than the existing methods. In our method the watermark is spread across many subband samples. Two factors need to be considered: a) the watermark must be embedded only in those subbands where the added noise remains inaudible; b) the watermark must be added to subbands with sufficient bit allocation, so that it does not become insignificant for lack of bits. Embedding the watermark in the lower subbands would cause distortion, while embedding it in the higher subbands would prove futile as their bit allocation is practically zero. Considering all these factors, noise is introduced to samples across many frames in subbands 4 to 8. In the verification process it is sufficient to have the key/code and the possibly attacked signal. The method has been satisfactorily tested for robustness to scale-factor change, LSB change, and MPEG decoding and re-encoding.
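The embedding and blind detection can be sketched with a key-seeded pseudo-noise sequence added to subbands 4 to 8 and a correlation detector that needs only the key. The frame-by-band matrix, the embedding strength, and the decision threshold are assumptions; no psychoacoustic control is included.

```python
# Sketch of spread-spectrum watermarking with blind detection: a key-seeded
# +/-1 sequence is added to subbands 4..8 and detected by correlation alone.
import numpy as np

BANDS = slice(4, 9)                        # subbands 4..8, as argued above

def pn_sequence(key, shape):
    return np.where(np.random.default_rng(key).random(shape) < 0.5, -1.0, 1.0)

def embed(subbands, key, alpha=0.05):
    marked = subbands.copy()
    marked[:, BANDS] += alpha * pn_sequence(key, subbands[:, BANDS].shape)
    return marked

def detect(subbands, key, threshold=3.0):
    wm = pn_sequence(key, subbands[:, BANDS].shape).ravel()
    samples = subbands[:, BANDS].ravel()
    # Normalised correlation: large only if this key's watermark is present.
    stat = samples @ wm / (np.std(samples) * np.sqrt(samples.size))
    return stat > threshold, stat

subbands = np.random.default_rng(0).standard_normal((2000, 32))
marked = embed(subbands, key=42)
print(detect(marked, key=42))              # (True, statistic around 5)
print(detect(marked, key=7))               # (False, statistic near 0)
```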
8. Multiresolution analysis of ultrasound images of the prostate. Zhao, Fangwei. January 2004 (has links)
[Truncated abstract] Transrectal ultrasound (TRUS) has become the urologist's primary tool for diagnosing and staging prostate cancer due to its real-time and non-invasive nature, low cost, and minimal discomfort. However, the interpretation of a prostate ultrasound image depends critically on the experience and expertise of the urologist and remains difficult and subjective. To overcome subjective interpretation and facilitate objective diagnosis, computer-aided analysis of ultrasound images of the prostate would be very helpful; it may improve diagnostic accuracy by providing a more reproducible interpretation of the images. This thesis addresses several key elements of computer-aided analysis of ultrasound images of the prostate, specifically the following tasks: 1. modelling B-mode ultrasound image formation and statistical properties; 2. reducing ultrasound speckle; and 3. extracting the prostate contour. Speckle refers to the granular appearance that compromises image quality and resolution in optics, synthetic aperture radar (SAR), and ultrasound. Because of speckle, the appearance of a B-mode ultrasound image does not necessarily relate to the internal structure of the object being scanned. A computer simulation of B-mode ultrasound imaging is presented, which provides not only insight into the nature of speckle but also a viable test-bed for any ultrasound speckle reduction method. Motivated by analysis of the statistical properties of the simulated images, the generalised Fisher-Tippett distribution is empirically proposed for analysing the statistical properties of ultrasound images of the prostate. A speckle reduction scheme is then presented, based on Mallat and Zhong's dyadic wavelet transform (MZDWT), on modelling the statistical properties of the wavelet coefficients, and on exploiting their inter-scale correlation. Specifically, the squared modulus of the component wavelet coefficients is modelled as a two-state Gamma mixture. Inter-scale correlation is exploited by taking the harmonic mean of the posterior probability functions derived from the Gamma mixture. This noise reduction scheme is applied to both simulated and real ultrasound images, and its performance is quite satisfactory: the important features of the original noise-corrupted image are preserved while most of the speckle noise is removed. It is also evaluated qualitatively and quantitatively against median, Wiener, and Lee filters, and the results reveal that it surpasses all of them. A novel contour extraction scheme (CES), which fuses MZDWT and snakes, is proposed on the basis of multiresolution analysis (MRA). Extraction of the prostate contour is placed in the multi-scale framework provided by MZDWT: the external potential functions of the snake are designated as the modulus of the wavelet coefficients at different scales, and are thus "switchable". Such a multi-scale snake, which deforms and migrates from coarse to fine scales, eventually extracts the contour of the prostate.
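A simplified version of the mixture-based shrinkage can be sketched as follows: squared detail coefficients are modelled by a two-component mixture fitted with EM, and each coefficient is scaled by the posterior probability of the signal state. Exponential components (the shape-1 special case of the Gamma), a single-level dwt2 in place of MZDWT, the omission of the inter-scale harmonic-mean step, and the synthetic image are all simplifications of this sketch.

```python
# Simplified sketch of mixture-based shrinkage on squared wavelet moduli.
# Assumptions: two exponential components fitted by EM, single-level dwt2.
import numpy as np
import pywt

def em_two_exponentials(z, iters=50):
    m1, m2, w = z.mean() / 4, z.mean() * 2, 0.5   # noise mean, signal mean, weight
    for _ in range(iters):
        p1 = (1 - w) / m1 * np.exp(-z / m1)
        p2 = w / m2 * np.exp(-z / m2)
        r = p2 / (p1 + p2 + 1e-30)                # posterior P(signal | z)
        w = r.mean()
        m1 = (z * (1 - r)).sum() / ((1 - r).sum() + 1e-30)
        m2 = (z * r).sum() / (r.sum() + 1e-30)
    return r

image = np.random.gamma(4.0, 25.0, (256, 256))    # stand-in speckled image
cA, (cH, cV, cD) = pywt.dwt2(image, "haar")
shrunk = []
for band in (cH, cV, cD):
    post = em_two_exponentials(band.ravel() ** 2).reshape(band.shape)
    shrunk.append(band * post)                    # posterior-weighted shrinkage
denoised = pywt.idwt2((cA, tuple(shrunk)), "haar")
```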
9. Traçage de contenu vidéo : une méthode robuste à l'enregistrement en salle de cinéma / Towards camcorder recording robust video fingerprinting. Garboan, Adriana. 13 December 2012 (has links)
Sine qua non components of multimedia content distributed or shared over networks, fingerprinting techniques allow the identification of digital content based on compact signatures (fingerprints) computed from the content itself. The signatures must be invariant to content transformations such as filtering, compression, geometric modifications, and spatio-temporal sub-sampling/cropping; in practice, all of these transformations are non-linearly combined by the live camcorder recording use case.

The state-of-the-art limitations of video fingerprinting can be identified at three levels: (1) the uniqueness of the fingerprint is dealt with solely by heuristic procedures; (2) fingerprint matching is not built on a mathematical ground, resulting in a lack of robustness to live camcorder recording distortions; (3) very few, if any, fully scalable mono-modal methods exist.

The main contribution of this thesis is to specify, design, implement, and validate TrackART, a new video fingerprinting method that overcomes these limitations. To ensure a unique and mathematical representation of the video content, the fingerprint is represented by a set of wavelet coefficients, selected by a statistical criterion on their properties. To grant the fingerprints robustness to the mundane or malicious distortions that appear in practical use cases, fingerprint matching is based on a repeated Rho test on correlation. To make the method efficient for large-scale databases, a localization algorithm based on a bag of visual words representation (Sivic and Zisserman, 2003) is employed. An additional synchronization mechanism automatically corrects the time-varying jitter induced by live camcorder recording.

TrackART was validated in an industrial partnership with professional players in cinematography special effects (Mikros Image) and with the French Cinematography Authority (CST - Commission Supérieure Technique de l'Image et du Son). The reference database consists of 14 hours of video content; the query dataset consists of 25 hours of replica content obtained by applying nine types of distortion to a third of the reference videos. Assessed objectively in the live camcorder recording context, TrackART achieves a probability of false alarm lower than 16×10^-6, a probability of missed detection lower than 0.041, and precision and recall equal to 0.93. These results represent an advance over the state of the art, which exhibits no video fingerprinting method robust to live camcorder recording, and validate a first proof of concept of the developed statistical methodology.
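The fingerprint-and-match pipeline can be sketched with coarse-scale wavelet coefficients as the signature and a correlation statistic as the decision. The db2 wavelet, the single Pearson test standing in for the repeated Rho test, the threshold, and the synthetic frames are assumptions of this sketch; the statistical coefficient selection and the bag-of-visual-words localization are not reproduced.

```python
# Sketch of fingerprint-and-match: coarse wavelet coefficients as the
# signature, a correlation statistic as the decision.
import numpy as np
import pywt

def fingerprint(frame, wavelet="db2", level=3):
    # Coarse-scale coefficients: compact and tolerant to mild distortions.
    return pywt.wavedec2(frame.astype(float), wavelet, level=level)[0].ravel()

def match(fp_query, fp_ref, rho_min=0.7):
    rho = np.corrcoef(fp_query, fp_ref)[0, 1]     # Pearson correlation
    return rho > rho_min, rho

rng = np.random.default_rng(0)
frame = rng.random((128, 128))
replica = frame + 0.2 * rng.standard_normal((128, 128))  # "attacked" copy
other = rng.random((128, 128))                           # unrelated content
print(match(fingerprint(replica), fingerprint(frame)))   # (True, high rho)
print(match(fingerprint(other), fingerprint(frame)))     # (False, near 0)
```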
10. EEG Data acquisition and automatic seizure detection using wavelet transforms in the newborn EEG. Zarjam, Pega. January 2003 (has links)
This thesis deals with the problem of newborn seizure detection from electroencephalogram (EEG) signals. The ultimate goal is to design an automated seizure detection system to assist medical personnel in timely seizure detection. Seizure detection is vital, as neurological diseases or dysfunctions in newborn infants are often first manifested by seizure, and prolonged seizures can result in impaired neuro-development or even fatality. The EEG has proved superior to clinical examination of newborns in early detection and prognostication of brain dysfunctions. However, long-term newborn EEG acquisition is considerably more difficult than that of adults and children: the number of electrodes attached to the skin is limited by the size of the head, newborns' EEGs vary from day to day, and newborns are reluctant to be in the recording situation. Movement of the newborn can also create artifacts in the recording and thus strongly affect electrical seizure recognition. Most of the existing methods for neonates are either time or frequency based and therefore do not consider the non-stationary nature of the EEG signal. Thus, notwithstanding the plethora of existing methods, this thesis applies the discrete wavelet transform (DWT) to account for the non-stationarity of the EEG signals. First, two methods for seizure detection in neonates are proposed. The detection schemes are based on observing the changing behaviour of a number of statistical quantities of the wavelet coefficients (WC) of the EEG signal at different scales. In the first method, the variance and mean of the WC are taken as a feature set to classify the EEG data into seizure and non-seizure; the tests give an average seizure detection rate (SDR) of 97.4%. In the second method, the number of zero-crossings and the average distance between adjacent extrema of the WC of certain scales form the feature set; the tests give an average SDR of 95.2%. Both proposed feature sets are simple to implement and have high detection rates and low false alarm rates. Then, to reduce the complexity of the proposed schemes, two optimisation methods are used to reduce the number of selected features. First, the mutual information feature selection (MIFS) algorithm is applied to select the optimum feature subset; the results show that an optimal subset of 9 features provides an SDR of 94%. Compared to the full feature set, the optimal feature set thus significantly reduces the system complexity. The drawback of the MIFS algorithm is that it ignores the interaction between features. To overcome this, an alternative algorithm, the mutual information evaluation function (MIEF), is then used. The MIEF evaluates a set of candidate features extracted from the WC to select an informative feature subset; it is based on measuring the information gain and takes the interaction between features into consideration. The performance of the resulting features is evaluated and compared to that of the features obtained using the MIFS algorithm. The MIEF algorithm selected an optimal 10 features, giving an average SDR of 96.3%, and it is also shown that an average SDR of 93.5% can be obtained with only 4 features. In comparison with the first two methods, the optimal feature subsets improve the system performance and significantly reduce the system complexity for implementation purposes.
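The first scheme's feature idea, per-scale statistics of the wavelet coefficients feeding a classifier, can be sketched on synthetic epochs. The rhythmic 3 Hz surrogate for seizure activity, the db4 wavelet, and the logistic-regression classifier are assumptions; the thesis's newborn EEG data and decision procedure are not reproduced.

```python
# Sketch: per-scale wavelet statistics (mean, variance) as seizure features.
# Assumptions: synthetic epochs, a rhythmic 3 Hz "seizure" component.
import numpy as np
import pywt
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
fs, n = 256, 512                                   # 2-second epochs

def epoch(seizure):
    x = rng.standard_normal(n)                     # background activity
    if seizure:                                    # add rhythmic 3 Hz activity
        x += 2.0 * np.sin(2 * np.pi * 3 * np.arange(n) / fs)
    return x

def features(x, wavelet="db4", level=5):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    return np.array([f(c) for c in coeffs for f in (np.mean, np.var)])

X = np.array([features(epoch(s)) for s in [0, 1] * 100])
y = np.array([0, 1] * 100)
clf = LogisticRegression(max_iter=1000).fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```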