  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Smooth relevance vector machines

Schmolck, Alexander January 2008 (has links)
Regression tasks belong to the set of core problems faced in statistics and machine learning, and promising approaches can often be generalized to deal with classification, interpolation, or denoising problems as well. Whereas the most widely used classical statistical techniques place severe a priori constraints on the type of function that can be approximated (e.g. only lines, in the case of linear regression), the successes of sparse kernel learners such as the SVM (support vector machine) demonstrate that good results may be obtained in a quite general framework by enforcing sparsity. Similarly, even very simple sparsity-based denoising techniques, such as classical wavelet shrinkage, can produce surprisingly good results on a wide variety of signals because, unlike noise, most signals of practical interest share vital characteristics (such as smoothness, or the ability to be well approximated by piecewise polynomials of low order) that allow a sparse representation in wavelet space. On the other hand, results obtained from SVMs (and classical wavelet shrinkage) suffer from a certain lack of interpretability, since one cannot straightforwardly attach probabilities to them. By contrast, regression, and even more importantly classification, in a Bayesian context always entails a probabilistic measure of confidence in the results, which, provided the model assumptions are reasonably accurate, forms a basis for principled decision-making. The relevance vector machine (RVM) combines these strengths by explicitly encoding the criterion of model sparsity as a (Bayesian) prior over the model weights, and offers a single, unified paradigm to deal efficiently with regression as well as classification tasks. However, the lack of an explicit prior structure over the weight variances means that the degree of sparsity is to a large extent controlled by the choice of kernel (and kernel parameters). This can lead to severe overfitting or oversmoothing, possibly even both at the same time (e.g. for the multiscale Doppler data). This thesis details an efficient scheme to control sparsity in Bayesian regression by incorporating a flexible noise-dependent smoothness prior into the RVM. The resulting smooth RVM (sRVM) encompasses the original RVM as a special case, but empirical results with a variety of popular data sets show that it can surpass RVM performance in goodness of fit and achieved sparsity, as well as computational performance, in many cases. As the smoothness prior effectively makes it possible to use (highly efficient) wavelet kernels in an RVM setting, this work also unveils a strong connection between Bayesian wavelet shrinkage and RVM regression, and effectively extends the applicability of the RVM to denoising tasks for up to millions of data points. We further discuss its applicability to classification tasks.
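The plain-RVM machinery the thesis extends can be illustrated with a minimal NumPy sketch of Tipping-style iterative re-estimation of the per-weight precisions (alphas), with the noise precision beta held fixed. All parameters here (kernel width, beta, iteration count, pruning cutoff) are illustrative assumptions, and the sRVM's noise-dependent smoothness prior is deliberately omitted.

```python
import numpy as np

def rvm_fit(Phi, t, beta=25.0, n_iter=100):
    """Sparse Bayesian regression sketch: re-estimate one precision
    hyperparameter per weight; weights whose precision diverges are
    effectively pruned from the model."""
    M = Phi.shape[1]
    alpha = np.ones(M)                     # one precision per basis function
    mu = np.zeros(M)
    for _ in range(n_iter):
        Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)  # posterior covariance
        mu = beta * Sigma @ (Phi.T @ t)                             # posterior mean
        gamma = 1.0 - alpha * np.diag(Sigma)   # how "well determined" each weight is
        alpha = np.clip(gamma / (mu ** 2 + 1e-12), 1e-12, 1e12)     # re-estimate precisions
    relevant = alpha < 1e6                 # surviving "relevance vectors"
    return mu, relevant
```

On a smooth target with a redundant kernel basis, most alphas diverge and only a handful of relevance vectors remain, which is the sparsity behaviour the abstract describes.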
2

Bayesian variable selection in clustering via Dirichlet process mixture models

Kim, Sinae 17 September 2007 (has links)
The increased collection of high-dimensional data in various fields has raised a strong interest in clustering algorithms and variable selection procedures. In this dissertation, I propose a model-based method that addresses the two problems simultaneously. I use Dirichlet process mixture models to define the cluster structure and to introduce in the model a latent binary vector to identify discriminating variables. I update the variable selection index using a Metropolis algorithm and obtain inference on the cluster structure via a split-merge Markov chain Monte Carlo technique. I evaluate the method on simulated data and illustrate an application with a DNA microarray study. I also show that the methodology can be adapted to the problem of clustering functional high-dimensional data. There I employ wavelet thresholding methods in order to reduce the dimension of the data and to remove noise from the observed curves. I then apply variable selection and sample clustering methods in the wavelet domain. Thus my methodology is wavelet-based and aims at clustering the curves while identifying wavelet coefficients describing discriminating local features. I exemplify the method on high-dimensional and high-frequency tidal volume traces measured under an induced panic attack model in normal humans.
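The "work in the wavelet domain, then select coefficients" idea can be sketched very roughly as follows. This is not the thesis's Metropolis-updated latent selection vector: the variance-ranking rule, the Haar basis, and all parameters are simplifying assumptions, chosen only to show curves being transformed and a small subset of coefficients retained as features.

```python
import numpy as np

def haar_dwt(x):
    """Full Haar decomposition of a length-2^J signal; returns the flat
    coefficient vector (coarsest approximation first)."""
    coeffs = []
    a = np.asarray(x, float)
    while len(a) > 1:
        even, odd = a[0::2], a[1::2]
        coeffs.append((even - odd) / np.sqrt(2))   # detail coefficients
        a = (even + odd) / np.sqrt(2)              # approximation
    return np.concatenate([a] + coeffs[::-1])

def select_wavelet_features(curves, k=10):
    """Keep the k wavelet coefficients with the largest variance across
    samples -- a crude stand-in for a discriminating-variable indicator."""
    W = np.array([haar_dwt(c) for c in curves])
    idx = np.argsort(W.var(axis=0))[::-1][:k]
    return W[:, idx], idx
```

Because the Haar transform is orthonormal, distances between curves are preserved in coefficient space, which is what makes clustering in the wavelet domain meaningful.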
3

Statistical Idealities and Expected Realities in the Wavelet Techniques Used for Denoising

DeNooyer, Eric-Jan D. 01 January 2010 (has links)
In the field of signal processing, one of the underlying enemies in obtaining a good-quality signal is noise. The most common examples of signals that can be corrupted by noise are images and audio signals. Since the early 1980s, when wavelet transforms were given their modern definition, statistical techniques have been incorporated into processes that use wavelets with the goal of maximizing signal-to-noise ratios. We provide a brief history of wavelet theory, going back to Alfréd Haar's 1909 dissertation on orthogonal functions, as well as its important relationship to the earlier work of Joseph Fourier (circa 1801), which brought about that famous mathematical transformation, the Fourier series. We demonstrate how wavelet theory can be used to reconstruct an analyzed function, and hence that it can be used to analyze and reconstruct images and audio signals as well. Then, in order to ground the understanding of the application of wavelets to the science of denoising, we discuss some important concepts from statistics. From all of these, we introduce the subject of wavelet shrinkage, a technique that combines wavelets and statistics into a "thresholding" scheme that effectively reduces noise without doing too much damage to the desired signal. Subsequently, we discuss how the effectiveness of these techniques is measured, both in the ideal sense and in the expected sense. We then look at an illustrative example of the application of one technique. Finally, we analyze this example more generally, in accordance with the underlying theory, and draw some conclusions as to when wavelets are an effective technique for increasing a signal-to-noise ratio.
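The "thresholding" scheme described above can be sketched as classical VisuShrink: soft-threshold the detail coefficients at the universal threshold sqrt(2 ln n)·σ, with σ estimated from the finest-scale details by the median absolute deviation (the standard Donoho–Johnstone recipe). The Haar basis and power-of-two signal length are simplifying assumptions.

```python
import numpy as np

def haar_step(a):
    return (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)

def inv_haar_step(a, d):
    out = np.empty(2 * len(a))
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

def visushrink(x):
    """Soft-threshold Haar wavelet shrinkage with the universal
    threshold; assumes len(x) is a power of two."""
    n = len(x)
    a, details = np.asarray(x, float), []
    while len(a) > 1:
        a, d = haar_step(a)
        details.append(d)                            # finest level first
    sigma = np.median(np.abs(details[0])) / 0.6745   # MAD noise estimate
    lam = sigma * np.sqrt(2 * np.log(n))             # universal threshold
    details = [np.sign(d) * np.maximum(np.abs(d) - lam, 0) for d in details]
    for d in reversed(details):                      # reconstruct, coarsest first
        a = inv_haar_step(a, d)
    return a
```

On a piecewise-constant signal, which the Haar basis represents sparsely, this typically cuts the mean squared error well below that of the noisy input.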
4

Optimal Transport Dictionary Learning and Non-negative Matrix Factorization / 最適輸送辞書学習と非負値行列因子分解

Rolet, Antoine 23 March 2021 (has links)
Kyoto University / New-system doctoral program / Doctor of Informatics / Kō No. 23314 / Jōhaku No. 750 / 新制||情||128 (Main Library) / Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University / (Examining committee) Prof. Akihiro Yamamoto (chair), Prof. Hisashi Kashima, Prof. Tatsuya Kawahara / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
5

One-dimensional Real-time Signal Denoising Using Wavelet-based Kalman Filtering

Durmaz, Murat 01 April 2007 (has links) (PDF)
Denoising signals is an important task in digital signal processing, and many linear and nonlinear methods for signal denoising have been developed. Wavelet-based denoising has lately become the best-known nonlinear denoising method. In the linear case, the Kalman filter is famous for its easy implementation and real-time nature. The recently developed Wavelet-Kalman filter is an important improvement over the Kalman filter: the Kalman filter operates in the wavelet domain, filtering the wavelet coefficients and producing the filtered wavelet transform of the signal in real time. Real-time filtering combined with a multiresolution representation is a powerful feature for many real-world applications. This study explains in detail the derivation and implementation of the real-time Wavelet-Kalman filter method for removing noise from signals in real time. The filter is enhanced to use wavelet types other than the Haar wavelet, and is also improved to operate on block sizes larger than two. Wavelet shrinkage is integrated into the filter, and it is shown that this integration yields additional noise suppression. A user-friendly application to import, filter, and export signals is developed in the Java programming language. Finally, the applicability of the proposed method to suppressing noise in seismic waves from earthquakes and to enhancing spontaneous potentials measured from groundwater wells is also shown.
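The Wavelet-Kalman filter runs Kalman updates on wavelet coefficients; as a minimal sketch of just the Kalman half, here is a scalar filter with a random-walk state model. The noise variances q and r are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def kalman_denoise(z, q=1e-4, r=0.25):
    """Scalar Kalman filter: state x follows a random walk with process
    variance q; each sample z[i] is x plus measurement noise of
    variance r. Returns the filtered estimate at every time step."""
    x, p = float(z[0]), 1.0
    out = np.empty(len(z))
    for i, zi in enumerate(z):
        p = p + q                 # predict: uncertainty grows by q
        k = p / (p + r)           # Kalman gain
        x = x + k * (zi - x)      # update with the new measurement
        p = (1 - k) * p           # posterior uncertainty shrinks
        out[i] = x
    return out
```

Because each output sample depends only on past inputs, the filter is causal, which is what makes the real-time operation described in the abstract possible.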
6

Étude et développement d'un dispositif routier d'anticollision basé sur un radar ultra large bande pour la détection et l'identification notamment des usagers vulnérables / Study and development of a road collision avoidance system based on ultra wide-band radar for obstacles detection and identification dedicated to vulnerable road users

Sadli, Rahmad 12 March 2019 (has links)
Dans ce travail de thèse, nous présentons nos travaux qui portent sur l’identification des cibles en général par un radar Ultra-Large Bande (ULB) et en particulier l’identification des cibles dont la surface équivalente radar est faible telles que les piétons et les cyclistes. Ce travail se décompose en deux parties principales, la détection et la reconnaissance. Dans la première approche du processus de détection, nous avons proposé et étudié un détecteur de radar ULB robuste qui fonctionne avec des données radar 1-D (A-scan) à une dimension. Il exploite la combinaison des statistiques d’ordres supérieurs et du détecteur de seuil automatique connu sous le nom de CA-CFAR pour Cell-Averaging Constant False Alarm Rate. Cette combinaison est effectuée en appliquant d’abord le HOS sur le signal reçu afin de supprimer une grande partie du bruit. Puis, après avoir éliminé le bruit du signal radar reçu, nous implémentons le détecteur de seuil automatique CA-CFAR. Ainsi, cette combinaison permet de disposer d’un détecteur de radar ULB à seuil automatique robuste. Afin d’améliorer le taux de détection et aller plus loin dans le traitement, nous avons évalué l’approche des données radar 2-D (B-Scan) à deux dimensions. Dans un premier temps, nous avons proposé une nouvelle méthode de suppression du bruit, qui fonctionne sur des données B-Scan. Il s’agit d’une combinaison de WSD et de HOS. Pour évaluer les performances de cette méthode, nous avons fait une étude comparative avec d’autres techniques de suppression du bruit telles que l’analyse en composantes principales, la décomposition en valeurs singulières, la WSD, et la HOS. Les rapports signal à bruit -SNR- des résultats finaux montrent que les performances de la combinaison WSD et HOS sont meilleures que celles des autres méthodes rencontrées dans la littérature. A la phase de reconnaissance, nous avons exploité les données des deux approches à 1-D et à 2-D obtenues à partir du procédé de détection. 
Dans la première approche à 1-D, les techniques SVM et le DBN sont utilisées et évaluées pour identifier la cible en se basant sur la signature radar. Les résultats obtenus montrent que la technique SVM donne de bonnes performances pour le système proposé où le taux de reconnaissance global moyen atteint 96,24%, soit respectivement 96,23%, 95,25% et 97,23% pour le cycliste, le piéton et la voiture. Dans la seconde approche à 1-D, les performances de différents types d’architectures DBN composées de différentes couches ont été évaluées et comparées. Nous avons constaté que l’architecture du réseau DBN avec quatre couches cachées est meilleure et la précision totale moyenne peut atteindre 97,80%. Ce résultat montre que les performances obtenues avec le DBN sont meilleures que celles obtenues avec le SVM (96,24%) pour ce système de reconnaissance de cible utilisant un radar ULB. Dans l’approche bidimensionnelle, le réseau de neurones convolutifs a été utilisé et évalué. Nous avons proposé trois architectures de CNN. La première est le modèle modifié d’Alexnet, la seconde est une architecture avec trois couches de convolution et une couche entièrement connectée, et la troisième est une architecture avec cinq couches de convolution et deux couches entièrement connectées. Après comparaison et évaluation des performances de ces trois architectures proposées nous avons constaté que la troisième architecture offre de bonnes performances par rapport aux autres propositions avec une précision totale moyenne qui peut atteindre 99,59%. Enfin, nous avons effectué une étude comparative des performances obtenues avec le CNN, DBN et SVM. Les résultats montrent que CNN a les meilleures performances en termes de précision par rapport à DBN et SVM. Cela signifie que l’utilisation de CNN dans les données radar bidimensionnels permet de classer correctement les cibles radar ULB notamment pour les cibles à faible SER et SNR telles que les cyclistes ou les piétons.
/ In this thesis work, we focus on the study and development of an identification system using UWB (Ultra-Wide-Band) short-range radar to detect objects, and particularly the vulnerable road users (VRUs) with a low Radar Cross Section (RCS), such as cyclists and pedestrians. The work is composed of two stages: detection and recognition. In the first approach to the detection stage, we propose and study a robust UWB radar detector that works on one-dimensional (1-D, A-scan) radar data. It relies on a combination of Higher Order Statistics (HOS) and the well-known CA-CFAR (Cell-Averaging Constant False Alarm Rate) detector: the HOS is first applied to the received radar signal to suppress the noise, and the CA-CFAR detector is then applied to the cleaned signal. The result is a UWB radar detector that is robust against noise and works with an adaptive threshold. To enhance detection performance, we evaluate the use of two-dimensional (2-D, B-Scan) radar data. In this 2-D approach, we propose a new noise-suppression method that works on B-Scan data: a combination of WSD (Wavelet Shrinkage Denoising) and HOS. To evaluate its performance, we perform a comparative study against other noise-removal methods from the literature, including Principal Component Analysis (PCA), Singular Value Decomposition (SVD), WSD alone, and HOS alone, computing the Signal-to-Noise Ratio (SNR) of the final result to compare their effectiveness. The combination of WSD and HOS is observed to remove noise better than the other applied techniques; in particular, it allows pedestrians and cyclists to be distinguished efficiently from noise and clutter, whereas the other techniques do not give significant results. In the recognition phase, we exploit the data from both the 1-D and 2-D approaches obtained from the detection stage. In the first 1-D approach, Support Vector Machines (SVM) and Deep Belief Networks (DBN) are used and evaluated to identify the target based on its radar signature. The results show that the SVM gives good performance for the proposed system, with a total recognition accuracy of up to 96.24%. In the second 1-D approach, the performance of several DBN architectures composed of different numbers of layers is evaluated and compared. The DBN architecture with four hidden layers performs better than those with two or three hidden layers, achieving up to 97.80% accuracy; this also shows that DBN outperforms SVM (96.24%) for a UWB radar target recognition system using 1-D radar signatures. In the 2-D approach, Convolutional Neural Networks (CNN) are exploited and evaluated. We propose and investigate three CNN architectures: the first is a modified AlexNet model, the second has three convolutional layers and one fully connected layer, and the third has five convolutional layers and two fully connected layers. After evaluating and comparing these architectures, we find that the third performs best, achieving up to 99.59% accuracy. Finally, we compare the performances obtained using CNN, DBN, and SVM. The results show that CNN gives the best accuracy, and that it correctly classifies UWB radar targets such as cyclists and pedestrians.
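The CA-CFAR step named in the abstract can be sketched as follows: the threshold for each cell under test is a scaled average of the surrounding training cells, with guard cells on either side excluded. The window sizes and false-alarm probability here are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def ca_cfar(x, n_train=8, n_guard=2, pfa=1e-3):
    """1-D cell-averaging CFAR over a power signal x: declare a
    detection where the cell under test exceeds alpha times the mean
    of the 2*n_train training cells around it."""
    n = len(x)
    N = 2 * n_train                          # total training cells
    alpha = N * (pfa ** (-1.0 / N) - 1.0)    # scaling for the desired Pfa
    hits = np.zeros(n, dtype=bool)
    for i in range(n_train + n_guard, n - n_train - n_guard):
        lead = x[i - n_guard - n_train : i - n_guard]
        lag = x[i + n_guard + 1 : i + n_guard + 1 + n_train]
        noise = np.concatenate([lead, lag]).mean()   # local noise estimate
        hits[i] = x[i] > alpha * noise
    return hits
```

Because the threshold is re-estimated from each cell's own neighbourhood, the false-alarm rate stays roughly constant even when the background noise level varies, which is the "adaptive threshold" property the abstract highlights.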
