21

Decomposição de potenciais evocados auditivos do tronco encefálico por meio de classificador probabilístico adaptativo / Decomposition of auditory brainstem evoked potentials by means of an adaptive probabilistic classifier

Naves, Kheline Fernandes Peres 18 January 2013 (has links)
Auditory Brainstem Response (ABR) signals result from the combination of neural responses to sound stimuli, detected over the cortex and characterized by peaks and valleys that are labelled with Roman numerals (I, II, III, IV, V, VI and VII). These peaks are classically identified by a manual process based on visual inspection of the signal generated by the averaging of the individual sweeps, in which the morphological characteristics of the signal and the temporal positions of the relevant Jewett waves are identified. In this visual process, however, difficulties may arise in recognizing the patterns present, which may vary with the site, the individual, the equipment and the selected protocol. This makes the analysis of the ABR subject to the influence of many variables and a constant source of doubt about the reliability of, and agreement between, examiners. This work was developed in order to create a self-learning system for the automatic detection of these peaks that takes the marking profile of the examiners into account. The continuous wavelet transform, an innovative technique for the detection of peaks, was used in association with a probabilistic classifier based on histograms built from information provided by the examiners. In the evaluation of the system, based on the agreement rate between the system and the manual technique, an accuracy ranging from 74.3% to 99.7% was obtained, depending on the wave. The proposed technique thus proves to be accurate even in the presence of the noise characteristic of biological signals, and especially of the ABR, which is a low-amplitude signal. / Doutor em Ciências
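The pairing of a continuous wavelet transform with peak picking that this abstract describes can be illustrated with a minimal sketch (not the author's code): a Ricker-wavelet response is computed at a single scale and its thresholded local maxima are taken as peak candidates. The synthetic trace, the scale and the threshold rule are all illustrative assumptions.

```python
import numpy as np

def ricker(points, a):
    # Ricker ("Mexican hat") wavelet sampled at `points` positions, width `a`
    t = np.arange(points) - (points - 1) / 2.0
    return (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def cwt_peak_candidates(signal, scale):
    # One scale of a CWT: correlate with a Ricker wavelet, then keep
    # thresholded local maxima of the response as peak candidates.
    kernel = ricker(min(10 * scale, len(signal)), scale)
    response = np.convolve(signal, kernel, mode="same")
    thr = 0.5 * response.max()                      # illustrative threshold
    is_peak = ((response[1:-1] > response[:-2]) &
               (response[1:-1] > response[2:]) &
               (response[1:-1] > thr))
    return np.where(is_peak)[0] + 1

# Synthetic "averaged ABR"-like trace: two wave-like bumps in noise
t = np.arange(500)
trace = (1.0 * np.exp(-0.5 * ((t - 150) / 8) ** 2) +
         0.8 * np.exp(-0.5 * ((t - 320) / 8) ** 2))
trace += 0.05 * np.random.default_rng(0).standard_normal(t.size)

peaks = cwt_peak_candidates(trace, scale=8)        # indices near 150 and 320
```

A full system along the thesis's lines would run this over several scales and feed the candidates to the histogram-based classifier rather than a fixed threshold.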
22

A Model Study For The Application Of Wavelet And Neural Network For Identification And Localization Of Partial Discharges In Transformers

Vaidya, Anil Pralhad 10 1900 (has links) (PDF)
No description available.
23

Nonlinear dynamics of microcirculation and energy metabolism for the prediction of cardiovascular risk

Smirni, Salvatore January 2018 (has links)
The peripheral skin microcirculation reflects the overall health status of the cardiovascular system and can be examined non-invasively by laser methods to assess early cardiovascular disease (CVD) risk factors such as oxidative stress and endothelial dysfunction. Examples of methods used for this task are laser Doppler flowmetry (LDF) and laser fluorescence spectroscopy (LFS), which respectively allow tracing of the blood flow and of the amounts of the coenzyme NAD(P)H (nicotinamide adenine dinucleotide (phosphate)) involved in the cellular production of ATP (adenosine triphosphate) energy. In this work, these methods were combined with iontophoresis and PORH (post-occlusive reactive hyperaemia) challenges to assess skin microvascular function and oxidative stress in mice and human subjects. The main focus of the research was exploring the nonlinear dynamics of skin LDF and NAD(P)H time series by processing the signals with wavelet transform analysis. The study of nonlinear fluctuations of the microcirculation and of cell energy metabolism allows the detection of dynamic oscillators reflecting the activity of microvascular factors (i.e. endothelial cells, smooth muscle cells, sympathetic nerves) and of specific patterns of mitochondrial or glycolytic ATP production. Monitoring these dynamic factors is valuable for the prediction of general vascular/metabolic health conditions, and can help the study of the mechanisms at the basis of the rhythmic fluctuations of micro-vessel diameter (vasomotion). In this thesis, the microvascular and metabolic dynamic biomarkers were characterised <i>in-vivo</i> in a mouse model affected by oxidative stress and in a human cohort of smokers. Comparison, respectively, with data from control mice and non-smokers revealed significant differences, suggesting the eligibility of these markers as predictors of risk associated with oxidative stress and smoking.
Moreover, a relevant link between microvascular and metabolic oscillators was observed during vasomotion induced by α-adrenergic (in mice) or PORH (in humans) stimulation, suggesting a possible role of cellular Ca<sup>2+</sup> oscillations of metabolic origin as drivers of vasomotion, a theory poorly explored in the literature. As a future perspective, further exploration of these promising nonlinear biomarkers is required in the presence of risk factors other than smoking or oxidative stress, and during vasomotion induced by stimuli other than PORH or α-adrenergic reactive challenges, to obtain a full picture of the use of these factors as predictors of risk and of their role in the regulation of vasomotion.
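The band-wise wavelet analysis of oscillatory signals described above can be sketched as follows. This is a generic single-frequency slice of a complex Morlet continuous wavelet transform applied to a synthetic two-tone surrogate, not the thesis's actual processing pipeline; the sampling rate, the wavelet parameter `w` and the test frequencies are illustrative assumptions.

```python
import numpy as np

def morlet_power(x, fs, freq, w=6.0):
    # Wavelet power at a single analysis frequency: convolve with a
    # unit-energy complex Morlet wavelet and take the squared magnitude.
    sigma_t = w / (2 * np.pi * freq)               # time spread of the wavelet
    n = int(8 * sigma_t * fs) | 1                  # odd kernel length
    t = (np.arange(n) - n // 2) / fs
    psi = np.exp(2j * np.pi * freq * t) * np.exp(-t ** 2 / (2 * sigma_t ** 2))
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2))       # unit-energy normalisation
    return np.abs(np.convolve(x, psi, mode="same")) ** 2

fs = 10.0                                          # Hz, assumed sampling rate
t = np.arange(0.0, 60.0, 1 / fs)
# Two-tone surrogate: a "cardiac-band" 1 Hz tone plus a slow 0.1 Hz oscillation
x = np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 0.1 * t)

p_cardiac = morlet_power(x, fs, 1.0)               # tracks the 1 Hz component
p_offband = morlet_power(x, fs, 0.33)              # little energy at 0.33 Hz
```

Repeating this over a grid of frequencies yields the time-frequency map from which band-limited oscillators (endothelial, myogenic, cardiac, and so on) are read off.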
24

What can we learn from climate data? : Methods for fluctuation, time/scale and phase analysis

Maraun, Douglas January 2006 (has links)
Since Galileo Galilei invented the first thermometer, researchers have tried to understand the complex dynamics of ocean and atmosphere by means of scientific methods. They observe nature and formulate theories about the climate system. For some decades, powerful computers have been capable of simulating the past and future evolution of climate.<br><br> Time series analysis tries to link the observed data to the computer models: using statistical methods, one estimates characteristic properties of the underlying climatological processes that in turn can enter the models. The quality of an estimation is evaluated by means of error bars and significance testing. On the one hand, such a test should be capable of detecting interesting features, i.e. be sensitive. On the other hand, it should be robust and sort out false positive results, i.e. be specific. <br><br> This thesis mainly aims to contribute to methodological questions of time series analysis, with a focus on sensitivity and specificity, and to apply the investigated methods to recent climatological problems. <br><br> First, the inference of long-range correlations by means of Detrended Fluctuation Analysis (DFA) is studied. It is argued that power-law scaling of the fluctuation function, and thus long memory, may not be assumed a priori but has to be established. This requires investigating the local slopes of the fluctuation function. The variability characteristic of stochastic processes is accounted for by calculating empirical confidence regions. The comparison of a long-memory with a short-memory model shows that the inference of long-range correlations from a finite amount of data by means of DFA is not specific. When aiming to infer short memory by means of DFA, a local slope larger than α = 0.5 for large scales does not necessarily imply long memory. Also, a finite scaling of the autocorrelation function is shifted to larger scales in the fluctuation function. It turns out that long-range correlations cannot be concluded unambiguously from the DFA results for the Prague temperature data set. <br><br> In the second part of the thesis, an equivalence class of nonstationary Gaussian stochastic processes is defined in the wavelet domain. These processes are characterized by means of wavelet multipliers and exhibit well-defined time-dependent spectral properties; they allow one to generate realizations of any nonstationary Gaussian process. The dependency of the realizations on the wavelets used for the generation is studied, and bias and variance of the wavelet sample spectrum are calculated. To overcome the difficulties of multiple testing, an areawise significance test is developed and compared to the conventional pointwise test in terms of sensitivity and specificity. Applications to climatological and hydrological questions are presented. <br><br> In the last part, the coupling between El Niño/Southern Oscillation (ENSO) and the Indian monsoon on inter-annual time scales is studied by means of Hilbert transformation and a curvature-defined phase. This method allows one to investigate the relation of two oscillating systems with respect to their phases, independently of their amplitudes. The performance of the technique is evaluated using a toy model. From the data, distinct epochs are identified, especially two intervals of phase coherence, 1886-1908 and 1964-1980, confirming earlier findings from a new point of view. A significance test of high specificity corroborates these results. Also, periods of coupling so far unknown and invisible to linear methods are detected. These findings suggest that the decreasing correlation during the last decades might be partly inherent to the ENSO/Monsoon system.
Finally, a possible interpretation of how volcanic radiative forcing could cause the coupling is outlined. / Since the invention of the thermometer by Galileo Galilei, researchers have tried to decipher the complex interrelations of the atmosphere and the oceans by scientific means. They observe nature and formulate theories about the climate system. For a few decades they have been supported by ever more powerful computers, which simulate the climate of Earth's history and of the near future. <br><br> Time series analysis attempts to establish the connection between the observations and the models: from the data, statistical methods are used to estimate characteristic properties of the underlying climatological processes, which can then enter the models. The assessment of such an estimate, which is always subject to measurement errors and to simplifications of the model, is carried out statistically by means of confidence intervals or significance tests. On the one hand, such tests should be able to recognize characteristic properties in the data, i.e. they should be sensitive. On the other hand, they should not suggest properties that are not there, i.e. they should be specific. For the trustworthy support of a hypothesis, a specific test is therefore required. <br><br> The present work investigates various methods of time series analysis, extends them where necessary, and applies them to typical climatological questions. Particular attention is paid to the specificity of each method; the limits of the conclusions that can be drawn by data analysis are discussed.<br><br> The first part of the work studies whether, and how, a so-called long memory of the underlying processes can be inferred from temperature time series by means of detrended fluctuation analysis. Such a memory means that the process never forgets its past, with fundamental consequences for the entire statistical assessment of the climate system. This work was able to show, however, that the analysis method is completely unspecific and that the hypothesis of long memory cannot be rejected at all. <br><br> The second part first discusses shortcomings of a very popular analysis method, so-called continuous wavelet spectral analysis, which estimates the variability of a process at different oscillation periods at particular times. An important drawback of the existing methodology is, here too, unspecific significance tests. Starting from this discussion, a theory of wavelet spectral analysis is developed which opens up a broad field of new applications. Building on it, specific significance tests are constructed.<br><br> The last part of the work analyses the influence of the El Niño/Southern Oscillation phenomenon on the Indian summer monsoon. It is investigated whether, and when, the oscillations of the two phenomena run synchronously. For this purpose, an established method is extended to the special demands of the analysis of typically very irregular climate data. By means of a specific significance test, earlier results could be confirmed with increased precision. In addition, this method was also able to detect new coupling intervals, which refuted the hypothesis that a recent disappearance of the coupling would be an unprecedented event. Finally, a hypothesis is presented of how volcanic aerosols could influence the coupling.
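The DFA procedure whose local slopes the first part of the thesis scrutinises can be sketched as follows. This is a generic first-order DFA on synthetic white noise, not the thesis's implementation; for a short-memory process, the local slopes of log F(s) against log s should stay near α = 0.5.

```python
import numpy as np

def dfa_fluctuation(x, scales):
    # First-order DFA: integrate the series, detrend it linearly in windows
    # of each scale s, and return the RMS residual F(s).
    y = np.cumsum(x - x.mean())
    F = []
    for s in scales:
        resid = []
        for i in range(len(y) // s):
            seg = y[i * s:(i + 1) * s]
            tt = np.arange(s)
            trend = np.polyval(np.polyfit(tt, seg, 1), tt)
            resid.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(resid)))
    return np.array(F)

rng = np.random.default_rng(1)
x = rng.standard_normal(2 ** 14)                   # short-memory (white) noise
scales = np.array([16, 32, 64, 128, 256])
F = dfa_fluctuation(x, scales)

# Local slopes of log F(s) vs log s, the quantity the thesis argues must be
# inspected instead of a single global power-law fit; near 0.5 for white noise.
slopes = np.diff(np.log(F)) / np.diff(np.log(scales))
```

The thesis's point is precisely that a single global slope can mislead: only the behaviour of these local slopes across scales, together with empirical confidence regions, supports or undermines a long-memory claim.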
25

Wavelet Based Algorithms For Spike Detection In Micro Electrode Array Recordings

Nabar, Nisseem S 06 1900 (has links)
In this work, the problem of detecting neuronal spikes or action potentials (APs) in noisy recordings from a Microelectrode Array (MEA) is investigated. In particular, the spike detection algorithms should have low computational complexity so as to be amenable to real-time applications. The advantage of the MEA is that it allows collection of extracellular signals from either a single unit or multiple (45) units within a small area. The noisy MEA recordings then undergo basic filtering and digitization and are presented to a computer for further processing. The challenge lies in using this data for detection of spikes from neuronal firings and extracting spatio-temporal patterns from the spike train, which may allow control of a robotic limb or other neuroprosthetic device directly from the brain. The aim is to understand the spiking action of the neurons and to use this knowledge to devise efficient algorithms for Brain Machine Interfaces (BMIs). An effective BMI will require a real-time, computationally efficient implementation which can be carried out on a DSP board or FPGA system. The aim is to devise algorithms which can detect spikes and the underlying spatio-temporal correlations, with computational and time complexities that make a real-time implementation feasible on a specialized DSP chip or an FPGA device. The time-frequency localization, multiresolution representation and analysis properties of wavelets make them suitable for analysing sharp transients and spikes in signals and for distinguishing them from noise resembling a transient or spike. Three algorithms for the detection of spikes in low-SNR MEA neuronal recordings are proposed: 1. A wavelet denoising method based on the Discrete Wavelet Transform (DWT) to suppress the noise power in the MEA signal and improve the SNR, followed by standard thresholding techniques to detect the spikes from the denoised signal. 2. Directly thresholding the coefficients of the Stationary (Undecimated) Wavelet Transform (SWT) to detect the spikes. 3. Thresholding the output of a Teager Energy Operator (TEO) applied to the discrete wavelet decomposition of the signal, resulting in a multiresolution TEO framework. The performance of the proposed three wavelet-based algorithms is evaluated in terms of the accuracy of spike detection, the percentage of false positives and the computational complexity, for different wavelet families, in the presence of colored AR(5) (autoregressive model with order 5) noise and additive white Gaussian noise (AWGN). The performance is further evaluated, for the chosen wavelet family, under different levels of SNR in the presence of the colored AR(5) and AWGN noise. Chapter 1 gives an introduction to the concept behind Brain Machine Interfaces (BMIs), an overview of their history, the current state of the art and trends for the future. It also describes the working of Microelectrode Arrays (MEAs). The generation of a spike in a neuron, the proposed mechanism behind it and its modeling as an electrical circuit based on the Hodgkin-Huxley model are described. An overview is given of some of the algorithms that have been suggested for spike detection, whether in MEA recordings or in electroencephalographic (EEG) signals. Chapter 2 describes in brief the underlying ideas that lead to the wavelet transform paradigm. An introduction to the Fourier Transform, the Short Time Fourier Transform (STFT) and the time-frequency uncertainty principle is provided. This is followed by a brief description of the Continuous Wavelet Transform and the multiresolution analysis (MRA) property of wavelets. The Discrete Wavelet Transform (DWT) and its filter bank implementation are described next. It is proposed to apply the wavelet denoising algorithm pioneered by Donoho to first denoise the MEA recordings, followed by a standard thresholding technique for spike detection. Chapter 3 deals with the use of the Stationary or Undecimated Wavelet Transform (SWT) for spike detection. It brings out the differences between the DWT and the SWT. A brief discussion of the analysis of non-stationary time series using the SWT is presented. An algorithm for spike detection is presented, based on directly thresholding the SWT coefficients without any need for reconstructing the denoised signal and thresholding it as in the first method. In chapter 4 a spike detection method based on a multiresolution Teager Energy Operator is discussed. The Teager Energy Operator (TEO) picks up localized spikes in signal energy and is thus used directly for spike detection in many applications, including R-wave detection in ECG and the various (alpha, beta) rhythms in EEG. Some basic properties of the TEO are discussed, followed by the need for a multiresolution approach to the TEO and the methods existing in the literature. The wavelet decomposition, with the subsampled signal at each level, lends itself naturally to a multiresolution TEO framework while significantly reducing the computational complexity due to the subsampling at each level. A wavelet-TEO algorithm for spike detection with accuracies similar to the previous two algorithms is proposed. The method proposed here differs significantly from that in the literature, since wavelets are used instead of time-domain processing. Chapter 5 describes the method of evaluation of the three algorithms proposed in the previous chapters. The spike templates are obtained from MEA recordings, resampled and normalized for use in spike trains simulated as Poisson processes. The noise is modeled both as colored autoregressive noise of order 5, i.e. AR(5), and as additive white Gaussian noise (AWGN). The noise in most human and animal MEA recordings conforms to an autoregressive model with order around 5. The AWGN model is used in most spike detection methods in the literature.
The performance of the proposed three wavelet based algorithms is measured in terms of the accuracy of spike detection, percentage of false positives and the computational complexity for different types of wavelet families. The optimal wavelet for this purpose is then chosen from the wavelet family which gives the best results. Also, optimal levels of decomposition and threshold factors are chosen while maintaining a balance between accuracy and false positives. The algorithms are then tested for performance under different levels of SNR with the noise modeled as AR(5) or AWGN. The proposed wavelet based algorithms exhibit a detection accuracy of approximately 90% at a low SNR of 2.35 dB with the false positives below 5%. This constitutes a significant improvement over the results in existing literature which claim an accuracy of 80% with false positives of nearly 10%. As the SNR increases, the detection accuracy increases to close to 100% and the false alarm rate falls to 0. Chapter 6 summarizes the work. A comparison is made between the three proposed algorithms in terms of detection accuracy and false positives. Directions in which future work may be carried out are suggested.
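Of the three detectors, the Teager Energy Operator step admits a particularly compact sketch. The toy example below (synthetic spike train, illustrative threshold rule) applies the discrete TEO, psi[n] = x[n]^2 - x[n-1]*x[n+1], directly to the raw signal; it omits the wavelet decomposition stage of the thesis's multiresolution framework and is not the author's code.

```python
import numpy as np

def teager(x):
    # Discrete Teager energy: psi[n] = x[n]^2 - x[n-1] * x[n+1]
    psi = np.empty_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    psi[0], psi[-1] = psi[1], psi[-2]              # pad the two boundary samples
    return psi

# Synthetic recording: brief biphasic "spikes" at known positions in noise
rng = np.random.default_rng(2)
x = 0.1 * rng.standard_normal(2000)
true_pos = [400, 1100, 1700]
template = np.array([0.2, 1.0, -0.6, 0.1])        # illustrative spike shape
for p in true_pos:
    x[p:p + 4] += template

psi = teager(x)
thr = psi.mean() + 5 * psi.std()                  # illustrative threshold rule
detections = np.where(psi > thr)[0]               # clusters around each spike
```

In the multiresolution version, the same operator-plus-threshold step is applied to each subsampled wavelet level, which is where the computational savings come from.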
26

Techniques de spectroscopie proche infrarouge et analyses dans le plan temps-fréquence appliquées à l’évaluation hémodynamique du très grand prématuré / Near-infrared spectroscopy techniques and time-frequency analyses applied to the haemodynamic assessment of the very preterm infant

Beausoleil, Thierry P. 12 1900 (has links)
No description available.
27

Sledování trendů elektrické aktivity srdce časově-frekvenčním rozkladem / Monitoring Trends of Electrical Activity of the Heart Using Time-Frequency Decomposition

Čáp, Martin January 2009 (has links)
This work is aimed at applying time-frequency decomposition of a signal to monitoring trends in the ECG. The goal is to create an algorithm that watches for changes in the ST segment of an ECG recording, and to realize it in Matlab. The origin of the ECG and its measurement are analyzed. Before the trends can be calculated, the signal must be preprocessed; this consists of filtering and of detecting the necessary fiducial points of the ECG signal. The wavelet transform is used for decomposing, filtering and measuring the signal. The source of the data is the biomedical database PhysioNet. As an outcome of the algorithm, ST-segment trends are drawn for three recordings from three different patients and compared with a reference method of ST qualification. To qualify the stability of the heart as a system, methods were designed that watch differences in the position of the maximal value in a two-zone spectrum, together with the Poincaré mapping method. The realized method is attached to this thesis.
28

Rozměřování experimentálních záznamů EKG / Delineation of experimental ECG data

Hejč, Jakub January 2013 (has links)
This thesis deals with the design of an algorithm for detection of the QRS complex and delineation of the boundaries of the typical ECG waves. It incorporates a literature survey focused on heart electrophysiology and on commonly used methods for ECG fiducial point detection and delineation. Out of the presented methods, an algorithm based on the continuous wavelet transform is implemented. The detection and delineation algorithm is tested on the standard CSE signal database against references determined both manually and automatically, and the obtained results are compared with other methods of the same kind. The thesis is further concerned with a modification of the algorithm for experimental electrocardiograms of isolated rabbit hearts. The recording specifics of these data are introduced and, based on time and frequency analysis, particular modifications of the algorithm are proposed and realized. Owing to the large number of records, functionality is verified on randomly selected database samples. The efficiency of the modified algorithm is evaluated against manually annotated references.
29

Predictability of Nonstationary Time Series using Wavelet and Empirical Mode Decomposition Based ARMA Models

Lanka, Karthikeyan January 2013 (has links) (PDF)
The idea behind time series forecasting techniques is that the past carries certain information about the future. How this information encoded in the past can be interpreted, and later used to extrapolate future events, constitutes the crux of time series analysis and forecasting. Several methods, such as qualitative techniques (e.g., the Delphi method), causal techniques (e.g., least squares regression) and quantitative techniques (e.g., smoothing methods, time series models), have been developed in the past, in which the concept lies in establishing a model, either theoretically or mathematically, from past observations and estimating the future from it. Of all the models, time series methods such as the autoregressive moving average (ARMA) process have gained popularity because of their simplicity of implementation and accuracy of forecasts. But these models were formulated based on certain properties that a time series is assumed to possess. Classical decomposition techniques were developed to supplement the requirements of time series models. These methods try to describe a time series in terms of simple patterns called trend, cyclical and seasonal patterns, along with noise. The idea of decomposing a time series into component patterns, modeling each component using forecasting processes and finally combining the component forecasts to obtain the actual time series predictions yielded superior performance over standard forecasting techniques. All these methods involve the basic principle of moving average computation. But the classical decomposition methods are disadvantageous in that they assume a fixed number of components for any time series and the decompositions are data-independent. Also, during moving average computation, the edges of the time series might not get modeled properly, which affects long-range forecasting. These issues are addressed by more efficient and advanced decomposition techniques such as wavelets and Empirical Mode Decomposition (EMD). Wavelets and EMD are among the most innovative concepts considered in time series analysis and are focused on processing nonlinear and nonstationary time series. Hence, this research has been undertaken to ascertain the predictability of nonstationary time series using wavelet and Empirical Mode Decomposition (EMD) based ARMA models. The development of wavelets has been based on the concepts of Fourier analysis and the windowed Fourier transform. In accordance with this, the necessity that led to the advent of wavelets is first presented, followed by a discussion of the advantages that wavelets provide. Wavelets were primarily defined for continuous time series. Later, in order to match real-world requirements, wavelet analysis was defined in the discrete scenario, which is called the Discrete Wavelet Transform (DWT). The current thesis utilizes the DWT for performing time series decomposition. A detailed discussion of the theory behind time series decomposition is presented in the thesis, followed by a description of the mathematical viewpoint of time series decomposition using the DWT, which involves the decomposition algorithm. EMD falls into the same class as wavelets as far as time series decomposition is concerned. EMD grew out of the fact that most time series in nature contain multiple frequencies, leading to the simultaneous existence of different scales. This method, when compared to standard Fourier analysis and wavelet algorithms, has greater scope for adaptation in processing various nonstationary time series. The method involves decomposing any complicated time series into a small, finite number of empirical modes (IMFs, Intrinsic Mode Functions), where each mode contains information of the original time series. The algorithm of time series decomposition using EMD is presented after a conceptual elucidation in the current thesis. Later, the proposed time series forecasting algorithm that couples EMD and an ARMA model is presented, which also takes into account the number of time steps ahead for which forecasting needs to be performed. In order to test the wavelet and EMD based methodologies for prediction of time series with non-stationarity, series of streamflow data from the USA and rainfall data from India are used in the study. Four non-stationary streamflow sites (USGS data resources) with monthly total volumes and two non-stationary gridded rainfall sites (IMD) with monthly total rainfall are considered for the study. The predictability of the proposed algorithm is checked in two scenarios, the first being six-months-ahead forecasts and the second being twelve-months-ahead forecasts. Normalized Root Mean Square Error (NRMSE) and the Nash-Sutcliffe Efficiency Index (Ef) are used to evaluate the performance of the proposed techniques. Based on the performance measures, the results indicate that the wavelet based analyses generate good variations in the case of the six-months-ahead forecasts, maintaining harmony with the observed values at most of the sites. Although the methods are observed to capture the minima of the time series effectively in the case of both six and twelve months ahead predictions, better forecasts are obtained with the wavelet based method over the EMD based method in the case of twelve months ahead predictions. It is therefore inferred that the wavelet based method has better prediction capabilities than the EMD based method, despite some of the limitations of time series methods and the manner in which the decomposition takes place. Finally, the study concludes that the wavelet based time series algorithm can be used to model events such as droughts with reasonable accuracy. Some modifications that could be made to the model are also suggested, which can extend the scope of applicability to other areas in the field of hydrology.
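The decompose-model-recombine idea at the heart of this abstract can be sketched generically. The example below stands in for the wavelet/EMD stage with a simple moving-average split (an illustrative surrogate, not the DWT or EMD used in the thesis), fits a least-squares AR model to each component, and sums the component forecasts; the synthetic trend-plus-annual-cycle series, the window width and the AR order are assumptions.

```python
import numpy as np

def split_trend(x, width=12):
    # Surrogate for one decomposition level: causal moving-average
    # "approximation" plus the residual "detail" (x = approx + detail).
    pad = np.concatenate([np.full(width - 1, x[0]), x])
    approx = np.convolve(pad, np.ones(width) / width, mode="valid")
    return approx, x - approx

def fit_ar(x, p):
    # Least-squares AR(p): x[t] ~ a[0]*x[t-1] + ... + a[p-1]*x[t-p]
    X = np.column_stack([x[p - 1 - k:len(x) - 1 - k] for k in range(p)])
    a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return a

def forecast(x, a, h):
    # Iterate the fitted recursion h steps past the end of the series
    hist = list(x)
    for _ in range(h):
        hist.append(float(np.dot(a, hist[:-len(a) - 1:-1])))
    return np.array(hist[-h:])

t = np.arange(240)
series = 0.02 * t + np.sin(2 * np.pi * t / 12)     # trend + annual cycle
approx, detail = split_trend(series)

# Model each component separately (skipping the edge-affected start of the
# split), then recombine the component forecasts; six steps ahead here.
f = (forecast(approx, fit_ar(approx[24:], 3), 6) +
     forecast(detail, fit_ar(detail[24:], 3), 6))
```

The per-component models are far simpler than a single model of the raw series would need to be, which is exactly the argument made for the wavelet and EMD decompositions.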
30

Komprese signálů EKG s využitím vlnkové transformace / ECG Signal Compression Based on Wavelet Transform

Ondra, Josef January 2008 (has links)
Signal compression is a daily-used tool for reducing memory requirements and for fast data communication. Methods based on the wavelet transform seem to be very effective nowadays. Signal decomposition with a suitable filter bank, followed by quantization of the coefficients, represents one of the available techniques. After packing the quantized coefficients into one sequence, run-length coding together with Huffman coding is implemented. This thesis focuses on compression effectiveness for different wavelet transform and quantization settings.
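The compression chain described here (decomposition, coefficient quantization, run-length coding) can be sketched with a single-level orthonormal Haar transform on a synthetic trace. The wavelet, the quantizer step and the signal are illustrative assumptions, and the final Huffman stage is omitted; the point is that quantized detail coefficients are mostly zero runs, and that the reconstruction error is bounded by the quantizer step.

```python
import numpy as np

def haar_dwt(x):
    # One level of the orthonormal Haar transform (even-length input)
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def haar_idwt(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def run_length(q):
    # Run-length encode as (value, count) pairs; quantized detail
    # coefficients are mostly zero, which is where the compression comes from.
    out, i = [], 0
    while i < len(q):
        j = i
        while j < len(q) and q[j] == q[i]:
            j += 1
        out.append((int(q[i]), j - i))
        i = j
    return out

# Synthetic trace with one sharp "QRS-like" deflection
t = np.linspace(0.0, 1.0, 256)
sig = np.exp(-((t - 0.5) / 0.02) ** 2)

a, d = haar_dwt(sig)
step = 0.05                                       # illustrative quantizer step
qa, qd = np.round(a / step).astype(int), np.round(d / step).astype(int)
rle = run_length(qd)                              # compact: long zero runs
recon = haar_idwt(qa * step, qd * step)           # dequantize + inverse DWT
max_err = np.max(np.abs(recon - sig))             # bounded by the step size
```

A practical codec would recurse the transform over several levels and entropy-code the run-length pairs, e.g. with Huffman coding as in the thesis.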
