  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Estimation and Control of Resonant Systems with Stochastic Disturbances

Nauclér, Peter, January 2008
The presence of vibration is an important problem in many engineering applications. Various passive techniques have traditionally been used to reduce waves and vibrations and their harmful effects. Passive techniques are, however, difficult to apply in the low-frequency region. In addition, passive techniques often involve adding mass to the system, which is undesirable in many applications.

As an alternative, active techniques can be used to manipulate system dynamics and to control the propagation of waves and vibrations. This thesis deals with modeling, estimation, and active control of systems with resonant dynamics. The systems are exposed to stochastic disturbances: some excite the system and generate vibrational responses, while others corrupt the measured signals.

Feedback control of a beam with attached piezoelectric elements is studied. A detailed modeling approach is described, and system identification techniques are employed for model order reduction. Disturbance attenuation of a non-measured variable proves to be difficult. This issue is analyzed further, and the problems are shown to stem from fundamental design limitations.

Feedforward control of traveling waves is also considered. A device with properties analogous to those of an electrical diode is introduced. An 'ideal' feedforward controller based on the mechanical properties of the system is derived. It has, however, poor noise rejection properties and therefore needs to be modified. A number of feedforward controllers that treat the measurement noise in a statistically sound way are derived.

Separation of overlapping traveling waves is another topic under investigation. This operation is also sensitive to measurement noise. The problem is thoroughly analyzed, and Kalman filtering techniques are employed to derive wave estimators with high statistical performance.

Finally, a nonlinear regression problem closely connected to unbalance estimation of rotating machinery is treated. Different estimation techniques are derived and analyzed with respect to their statistical accuracy. The estimators are evaluated using the example of separator balancing.
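The Kalman filtering mentioned above can be illustrated with a minimal scalar example. This is not the wave-separation estimator from the thesis, just a generic sketch of how a Kalman filter suppresses stochastic measurement noise; the random-walk state model and the noise variances `q` and `r` are illustrative choices.

```python
import math
import random

def kalman_filter(measurements, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter with a random-walk state model.

    q: process noise variance, r: measurement noise variance.
    Returns the sequence of filtered state estimates.
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: the state carries over, uncertainty grows by q.
        p += q
        # Update: blend prediction and measurement via the Kalman gain.
        k = p / (p + r)
        x += k * (z - x)
        p *= (1.0 - k)
        estimates.append(x)
    return estimates

random.seed(0)
# Slowly varying "vibration" signal observed in heavy measurement noise.
truth = [math.sin(0.02 * n) for n in range(400)]
noisy = [s + random.gauss(0.0, 0.5) for s in truth]
filtered = kalman_filter(noisy, q=1e-3, r=0.25)

def rmse(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

print(rmse(filtered, truth) < rmse(noisy, truth))  # filtering reduces error
```

The gain `k` balances trust in the model against trust in each new measurement; a larger `q/r` ratio makes the filter track faster but smooth less.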
2

A 3D Tomographic Reconstruction Model for Agricultural Samples with Wiener Filtering and Parallel Processing

Pereira, Mauricio Fernando Lima, 19 June 2007
This work presents a new method for three-dimensional (3D) image reconstruction, aimed at investigations in soil physics by means of X-ray tomography, that builds a volume from two-dimensional (2D) reconstructed tomographic slices. The 3D reconstruction and visualization model is based on the filtered back-projection algorithm, running in a parallel environment, together with the insertion of virtual planes between pairs of real planes obtained by X-ray tomography at energies ranging from 56 keV to 662 keV. In this model, the virtual planes are generated by interpolation with B-spline wavelets. To validate the model, a dedicated parallel platform composed of 4 DSP processors was used; this board enables data exchange between the DSP processors and communication with the host, a desktop computer with an 800 MHz Pentium III processor. Efficiency, speed-up, and accuracy of the parallel algorithms were measured on a set of agricultural samples (soil, glass, and wood) and calibration phantoms, under different configurations of the planes, varying both the sizes and the numbers of real and virtual planes in the volume. This arrangement allows the impact on performance to be measured as a function of increasing workload and communication granularity. In this evaluation, the 2D reconstruction algorithm, used as the basis for the 3D algorithm, achieved high efficiency for higher-resolution images, peaking at 92% at a resolution of 181x181 pixels. The parallel 3D algorithm achieved a mean speed-up of 3.4, with the best performance observed when reconstructing objects that required computing a larger number of planes.
To assess its adaptability, the model was also implemented on a conventional architecture, using the MPI library for communication between the tasks of each parallel algorithm. Additionally, 2D and 3D visualization tools based on the Visualization Toolkit (VTK) were included so that users can analyze the images and the characteristics of the agricultural samples in a 3D environment. The results indicate that the parallel 3D reconstruction model brings original contributions to agricultural tomography applied to soil physics, as well as to the creation of tools that exploit the computational resources available in parallel architectures for tasks demanding high processing capacity.
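The idea of inserting virtual planes between reconstructed slices can be sketched as follows. The thesis generates the planes with B-spline wavelet interpolation; this toy version substitutes plain linear interpolation between two 2D slices, so it illustrates only the data flow, not the actual interpolation scheme.

```python
def virtual_planes(plane_a, plane_b, n_virtual):
    """Insert n_virtual equally spaced planes between two real 2D slices.

    plane_a, plane_b: 2D lists (rows of pixel values) of equal shape.
    Linear interpolation stands in for the B-spline wavelet scheme.
    """
    planes = []
    for k in range(1, n_virtual + 1):
        t = k / (n_virtual + 1)  # fractional position between the planes
        planes.append([
            [(1.0 - t) * a + t * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(plane_a, plane_b)
        ])
    return planes

# Two tiny 2x2 "slices"; one virtual plane lands halfway between them.
real_a = [[0.0, 2.0], [4.0, 6.0]]
real_b = [[2.0, 4.0], [6.0, 8.0]]
mid = virtual_planes(real_a, real_b, 1)[0]
print(mid)  # → [[1.0, 3.0], [5.0, 7.0]]
```

In the parallel setting, each pair of real planes (and its virtual planes) is an independent work unit, which is what makes the workload easy to distribute across processors.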
4

Wavelet Filtering of ECG Signal

Slezák, Pavel, January 2010
The thesis deals with the possibilities of using the wavelet transform in noise reduction applications, primarily in the field of ECG signal denoising. We assess the impact of various filtering parameter settings, such as the method of thresholding the wavelet coefficients, the threshold levels, and the choice of decomposition and reconstruction filter banks. Our results are compared with those of linear filtering. The results of wavelet Wiener filtering with a pilot estimate are also described; here we mainly tested combinations of decomposition and reconstruction filter banks. All the filtering methods described are tested on real ECG records with additive noise of myopotential character and are implemented in the Matlab environment.
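A minimal single-level wavelet-thresholding denoiser, in the spirit of the methods compared above, can be sketched in a few lines. It uses the Haar wavelet and soft thresholding with a hand-picked threshold; the thesis works in Matlab with various filter banks and threshold-selection rules, none of which are reproduced here.

```python
import math
import random

def haar_forward(x):
    """One-level Haar transform: (approximation, detail) coefficients."""
    s = 1.0 / math.sqrt(2.0)
    approx = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    detail = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    s = 1.0 / math.sqrt(2.0)
    x = []
    for a, d in zip(approx, detail):
        x.extend([s * (a + d), s * (a - d)])
    return x

def soft(c, t):
    """Soft thresholding: shrink a coefficient toward zero by t."""
    return math.copysign(max(abs(c) - t, 0.0), c)

def denoise(x, threshold):
    """Single-level Haar denoising: threshold the detail band only."""
    approx, detail = haar_forward(x)
    detail = [soft(d, threshold) for d in detail]
    return haar_inverse(approx, detail)

random.seed(1)
clean = [math.sin(0.05 * n) for n in range(256)]  # smooth "ECG-like" trend
noisy = [c + random.gauss(0.0, 0.3) for c in clean]
out = denoise(noisy, threshold=0.5)

err = lambda a: math.sqrt(sum((u - v) ** 2 for u, v in zip(a, clean)) / len(a))
print(err(out) < err(noisy))  # thresholding the detail band reduces the error
```

For a smooth signal the detail band is mostly noise, so shrinking it costs little signal; multi-level decompositions extend the same step across several scales.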
6

A Design And Implementation Of P300 Based Brain-computer Interface

Erdogan, Hasan Balkar, 01 September 2009
In this study, a P300-based Brain-Computer Interface (BCI) system is designed and realized through an implementation of the spelling paradigm. The main challenge in these systems is to improve the speed of the prediction mechanisms through the application of different signal processing and pattern classification techniques. The thesis includes the design and implementation of a 10-channel electroencephalographic (EEG) data acquisition system for practical use in BCI applications. The electrical measurements are realized with active electrodes for continuous EEG recording, and the data is transferred via USB so that the device can be operated from any computer. Wiener filtering is applied to the P300 speller as a signal enhancement tool for the first time in the literature. With this method, the optimal temporal frequency bands for user-specific P300 responses are determined. The responses are classified using Support Vector Machines (SVMs) and Bayesian decision. These methods are applied independently to the row and column intensification groups of the P300 speller in order to observe differences in human perception of these two visual stimulation types. In the investigated datasets, the prediction accuracies of the two groups differ for each subject, even for optimal classification parameters. Furthermore, classification accuracy improved when the signals were preprocessed with Wiener filtering: the test characters in the P300 speller dataset of BCI Competition II were predicted with 100% accuracy in 4 trial repetitions, and only 8 trials were needed to predict the target character with the designed BCI system.
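The role of trial repetition in P300 spelling (the 4 or 8 repetitions above) comes down to averaging: the evoked response is tiny relative to the EEG background, and averaging repeated trials raises the SNR. The sketch below illustrates only this averaging principle with synthetic scores; the stimulus ids, amplitudes, and noise level are invented, and the actual system classifies with SVMs and Bayesian decision rather than a simple mean.

```python
import random

def predict_target(trials):
    """Pick the stimulus whose trial-averaged response is largest.

    trials: dict mapping stimulus id -> list of single-trial scores
    (e.g., projections of EEG epochs onto a P300 template).
    """
    means = {sid: sum(v) / len(v) for sid, v in trials.items()}
    return max(means, key=means.get)

random.seed(2)
TARGET = 3
# Six row stimuli; only the target row carries an evoked response
# (amplitude 1.0) buried in noise (std 2.0). Single trials are
# unreliable; the average over many repetitions is not.
trials = {
    sid: [(1.0 if sid == TARGET else 0.0) + random.gauss(0.0, 2.0)
          for _ in range(200)]
    for sid in range(6)
}
print(predict_target(trials))
```

Averaging n trials shrinks the noise standard deviation by a factor of sqrt(n), which is exactly why fewer repetitions (faster spelling) demand better per-trial enhancement such as the Wiener filtering used in the thesis.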
7

Implementation and Evaluation of Spectral Subtraction with Minimum Statistics using WOLA and FFT Modulated Filter Banks

Rao, Peddi Srinivas; Sreelatha, Vallabhaneni, January 2014
In a communication system, the speech signal is corrupted by additive acoustic noise, and this distortion degrades the quality and intelligibility of the speech. The challenge addressed in this thesis is how to remove acoustic noise effectively without degrading the original speech quality. This work proposes a multi-tiered detection method based on time-frequency analysis (the filter bank concept) of noisy speech signals, building on standard speech enhancement by the proven spectral subtraction method, for single-channel speech data and a wide range of noise types at various noise levels. Several variants of the standard spectral subtraction method proposed by S. F. Boll have been introduced over the years. In this thesis we design and implement an approach to spectral subtraction based on minimum statistics (MinSSS): the power spectrum of the non-stationary noise signal is estimated by tracking the minimum values of a smoothed power spectrum of the noisy speech signal, which circumvents the speech activity detection problem and also handles non-stationary noise. To analyze the system in the time-frequency domain, we implemented two different filter bank approaches: Weighted Overlap-Add (WOLA) and FFT-modulated (FFTMod) filter banks. The proposed systems were implemented and evaluated offline in Matlab, and their performance was validated with objective quality measures, namely Signal-to-Noise Ratio Improvement (SNRI) and the Perceptual Evaluation of Speech Quality (PESQ) measure. The systems were tested with clean male and female speech sampled at 8 kHz, corrupted by various kinds of noise at different power levels.
The MinSSS algorithm implemented with the FFTMod filter bank approach outperformed the WOLA filter bank approach.
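The core of the minimum-statistics idea can be sketched for a single frequency bin: smooth the noisy power trajectory, track its minimum over a sliding window as the noise estimate, and subtract. The smoothing factor, window length, and spectral floor below are illustrative values, not the ones used in the thesis, and a real system applies this per bin of a filter bank.

```python
# Minimum-statistics noise tracking for one frequency bin: a simplified
# sketch of the MinSSS idea. Even while speech is active, the running
# minimum of the smoothed power still reflects recent noise-only frames,
# so no explicit speech activity detector is needed.

def spectral_subtract_bin(powers, alpha=0.9, window=20, floor=0.05):
    """powers: per-frame noisy power values for a single frequency bin."""
    smoothed = []
    cleaned = []
    p_s = powers[0]
    for p in powers:
        p_s = alpha * p_s + (1.0 - alpha) * p   # recursive smoothing
        smoothed.append(p_s)
        noise = min(smoothed[-window:])          # minimum statistics
        # Subtract the noise estimate, keeping a small spectral floor.
        cleaned.append(max(p - noise, floor * p))
    return cleaned

# Noise power ~1.0 throughout; speech energy arrives in frames 30-39.
frames = [1.0] * 30 + [10.0] * 10 + [1.0] * 20
out = spectral_subtract_bin(frames)
# During the burst the window minimum stays near the noise level, so
# most of the speech power survives subtraction.
print(out[35] > 8.0, out[5] < 0.2)  # → True True
```

The window length trades tracking speed against robustness: a short window adapts quickly to rising noise but risks mistaking sustained speech for noise.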
8

Robustness And Localization In Time-Varying Spectral Estimation

Viswanath, G 01 1900
No description available.
9

Nonstationary Techniques For Signal Enhancement With Applications To Speech, ECG, And Nonuniformly-Sampled Signals

Sreenivasa Murthy, A, January 2012
For time-varying signals such as speech and audio, short-time analysis becomes necessary to compute specific signal attributes and to keep track of their evolution. The standard technique is the short-time Fourier transform (STFT), in which a signal is decomposed in terms of windowed Fourier bases. An advancement over the STFT is wavelet analysis, in which a function is represented in terms of shifted and dilated versions of a localized function called the wavelet. A specific modeling approach, particularly in the context of speech, is based on short-time linear prediction or short-time Wiener filtering of noisy speech. In most nonstationary signal processing formalisms, the key idea is to analyze the properties of the signal locally, either by first truncating the signal and then performing a basis expansion (as in the STFT), or by choosing compactly-supported basis functions (as in the case of wavelets). We retain the same motivation as these approaches, but use polynomials to model the signal on a short-time basis ("short-time polynomial representation"). To emphasize the local nature of the modeling aspect, we refer to it as "local polynomial modeling (LPM)." We pursue two main threads of research in this thesis: (i) short-time approaches for speech enhancement; and (ii) LPM for enhancing smooth signals, with applications to ECG, noisy nonuniformly-sampled signals, and voiced/unvoiced segmentation in noisy speech.

Improved iterative Wiener filtering for speech enhancement

A constrained iterative Wiener filter solution for speech enhancement was proposed by Hansen and Clements. Sreenivas and Kirnapure improved the performance of the technique by imposing codebook-based constraints in the process of parameter estimation; the key advantage is that the optimal parameter search space is confined to the codebook. These nonstationary signal enhancement solutions, however, assume stationary noise. In practical applications noise is not stationary, and hence updating the noise statistics becomes necessary. We present a new approach to reliable noise estimation based on spectral subtraction: we first estimate the signal spectrum and perform signal subtraction to estimate the noise power spectral density, and we further smooth the estimated noise spectrum to ensure reliability. The key contributions are: (i) adaptation of the technique to non-stationary noises; (ii) a new initialization procedure for faster convergence and higher accuracy; (iii) experimental determination of the optimal LP-parameter space; and (iv) objective criteria and speech recognition tests for performance comparison.

Optimal local polynomial modeling and applications

We next address the problem of fitting a piecewise-polynomial model to a smooth signal corrupted by additive noise. Since the signal is smooth, it can be represented using low-order polynomial functions, provided that they are locally adapted to the signal. We choose the mean-square error as the criterion of optimality. Since the model is local, it preserves the temporal structure of the signal and can also handle nonstationary noise. We show that there is a trade-off between the adaptability of the model to local signal variations and robustness to noise (the bias-variance trade-off), which we solve using a stochastic optimization technique known as the intersection of confidence intervals (ICI) technique. The key trade-off parameter is the duration of the window over which the optimum LPM is computed. Within the LPM framework, we address three problems: (i) signal reconstruction from noisy uniform samples; (ii) signal reconstruction from noisy nonuniform samples; and (iii) classification of speech signals into voiced and unvoiced segments. The generic signal model is x(tn) = s(tn) + d(tn), 0 ≤ n ≤ N - 1. In problems (i) and (iii) above, tn = nT (uniform sampling); in (ii), the samples are taken at nonuniform instants. The signal s(t) is assumed to be smooth; i.e., it should admit a local polynomial representation. The problem in (i) and (ii) is to estimate s(t) from x(tn); i.e., we are interested in optimal signal reconstruction on a continuous domain starting from uniform or nonuniform samples. We show that, in both cases, the bias and variance, and hence the mean square error (MSE), are governed by L, the length of the window over which the polynomial fitting is performed; by f, a function of s(t) that typically comprises the higher-order derivatives of s(t), the order itself depending on the order of the polynomial; and by g, a function of the noise variance. The bias and variance have complementary characteristics with respect to L. Directly optimizing the MSE would give a value of L that involves the functions f and g. The function g may be estimated, but f is not known, since s(t) is unknown. Hence, it is not practical to compute the minimum MSE (MMSE) solution, and we instead obtain an approximate result by solving the bias-variance trade-off in a probabilistic sense using the ICI technique. We also propose a new approach to optimally selecting the ICI technique parameters, based on a new cost function that is the sum of the probability of false alarm and the area covered by the confidence interval. In addition, we address issues related to optimal model-order selection, the search space for window lengths, the accuracy of noise estimation, etc.

The next issue addressed is voiced/unvoiced segmentation of the speech signal. Speech segments show different spectral and temporal characteristics depending on whether a segment is voiced or unvoiced, and most speech processing techniques process the two types of segment differently. The challenge lies in making detection techniques robust in the presence of noise.
We propose a new technique for voiced/unvoiced classification that exploits the fact that voiced segments have a certain degree of regularity, whereas unvoiced segments do not possess such smoothness. In order to capture the regularity in voiced regions, we employ the LPM. The key idea is that regions where the LPM is inaccurate are more likely to be unvoiced than voiced. Within this framework, we formulate a hypothesis testing problem based on the accuracy of the LPM fit and devise a test statistic for voiced/unvoiced classification. Since the technique is based on the LPM, it is capable of adapting to nonstationary noises. We present Monte Carlo results to demonstrate the accuracy of the proposed technique.
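A stripped-down version of local polynomial modeling is a sliding-window least-squares line fit evaluated at each sample. The sketch below fixes the window length by hand, whereas the thesis selects it adaptively with the ICI rule; it is meant only to show why a local low-order fit suppresses noise on a smooth signal.

```python
import math
import random

def lpm_smooth(x, half_width):
    """Sliding-window degree-1 local polynomial fit, evaluated at the
    window center. A minimal LPM sketch with a fixed window length
    (the ICI-based adaptive window of the thesis is not reproduced).
    """
    n = len(x)
    out = []
    for i in range(n):
        lo, hi = max(0, i - half_width), min(n, i + half_width + 1)
        ts = list(range(lo, hi))
        # Closed-form least squares for a line y = b0 + b1 * t.
        m = len(ts)
        st, sx = sum(ts), sum(x[lo:hi])
        stt = sum(t * t for t in ts)
        stx = sum(t * v for t, v in zip(ts, x[lo:hi]))
        denom = m * stt - st * st
        b1 = (m * stx - st * sx) / denom
        b0 = (sx - b1 * st) / m
        out.append(b0 + b1 * i)
    return out

random.seed(3)
clean = [math.sin(0.03 * n) for n in range(300)]  # smooth underlying signal
noisy = [c + random.gauss(0.0, 0.4) for c in clean]
smooth = lpm_smooth(noisy, half_width=10)

rmse = lambda a: math.sqrt(sum((u - v) ** 2 for u, v in zip(a, clean)) / len(a))
print(rmse(smooth) < rmse(noisy))  # the local fit suppresses the noise
```

The bias-variance trade-off in the text is visible here: a larger `half_width` averages away more noise (lower variance) but tracks curvature less faithfully (higher bias), which is exactly the tension the ICI technique resolves.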
10

Enhancement of Music Signals in Noisy Environments

Παπανικολάου, Παναγιώτης, 20 October 2010
This thesis applies noise reduction algorithms to music signals and draws conclusions about their performance for each musical genre. The main aims are to clarify the basic problems of sound enhancement and to present the various algorithms developed to solve them. After a brief introduction to the basic concepts of sound enhancement, we examine and analyze representative algorithms from each class of denoising techniques proposed in the speech enhancement literature: spectral subtractive algorithms, statistical-model-based algorithms, and subspace algorithms.
To evaluate the performance of these algorithms, we use objective quality measures, whose results allow us to compare the algorithms against one another. Using four different objective measures, we conduct experiments that yield a set of indicative values for making both within-class and across-class algorithm comparisons. From these comparisons we draw conclusions about the choice of parameters for each algorithm and about the suitability of each algorithm for specific noise conditions and specific music genres.
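Objective quality measures like those used above can be as simple as a global SNR computed against the clean reference; the function below is a hedged stand-in (real evaluations typically use segmental SNR or PESQ, which are considerably more involved), and the "enhanced" signal is synthetic.

```python
import math

def snr_db(clean, processed):
    """Global SNR in dB between a clean reference and a processed signal.
    A simple stand-in for the objective quality measures discussed above.
    """
    signal_energy = sum(c * c for c in clean)
    noise_energy = sum((c - p) ** 2 for c, p in zip(clean, processed))
    return 10.0 * math.log10(signal_energy / noise_energy)

clean = [math.sin(0.1 * n) for n in range(1000)]
noisy = [c + 0.1 for c in clean]    # constant offset stands in for noise
better = [c + 0.01 for c in clean]  # after a (hypothetical) enhancer
improvement = snr_db(clean, better) - snr_db(clean, noisy)
print(round(improvement))  # → 20  (residual amplitude reduced 10x)
```

SNR improvement (SNRI) is just this difference between output and input SNR, which is why it needs access to the clean signal and is only usable in simulation.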
