1

Kernel LMS à noyau gaussien : conception, analyse et applications à divers contextes / Gaussian kernel least-mean-square : design, analysis and applications

Gao, Wei, 09 December 2015
L’objectif principal de cette thèse est de décliner et d’analyser l’algorithme kernel-LMS à noyau gaussien dans trois cadres différents : celui des noyaux uniques et multiples, à valeurs réelles et à valeurs complexes, dans un contexte d’apprentissage distribué et coopératif dans les réseaux de capteurs. Plus précisément, ce travail s’intéresse à l’analyse du comportement en moyenne et en erreur quadratique de différents types d’algorithmes LMS à noyau. Les modèles analytiques de convergence obtenus sont validés par des simulations numériques. Tout d’abord, nous introduisons l’algorithme LMS, les espaces de Hilbert à noyau reproduisant, ainsi que les algorithmes de filtrage adaptatif à noyau existants. Puis, nous étudions analytiquement le comportement de l’algorithme LMS à noyau gaussien dans le cas où les statistiques des éléments du dictionnaire ne correspondent que partiellement aux statistiques des données d’entrée. Nous introduisons ensuite un algorithme LMS à noyau modifié, basé sur une approche proximale. La stabilité de l’algorithme est également discutée. Ensuite, nous introduisons deux types d’algorithmes LMS à noyaux multiples. Nous nous concentrons en particulier sur l’analyse de convergence de l’un d’eux. Plus généralement, les caractéristiques des deux algorithmes LMS à noyaux multiples sont analysées théoriquement et confirmées par les simulations. L’algorithme LMS à noyau complexe augmenté est présenté et ses performances analysées. Enfin, nous proposons des stratégies de diffusion fonctionnelles dans les espaces de Hilbert à noyau reproduisant. La stabilité de l’algorithme est étudiée. / The main objective of this thesis is to derive and analyze the Gaussian kernel least-mean-square (LMS) algorithm within three frameworks: single and multiple kernels, real-valued and complex-valued settings, and non-cooperative and cooperative distributed learning over networks. This work focuses on the stochastic behavior analysis of these kernel LMS algorithms in the mean and mean-square error sense. All the analyses are validated by numerical simulations. First, we review the basic LMS algorithm, the reproducing kernel Hilbert space (RKHS) framework, and state-of-the-art kernel adaptive filtering algorithms. Then, we study the convergence behavior of the Gaussian kernel LMS in the case where the statistics of the elements of the so-called dictionary only partially match the statistics of the input data. We then introduce a modified kernel LMS algorithm based on forward-backward splitting to deal with $\ell_1$-norm regularization. The stability of the proposed algorithm is then discussed. After a review of two families of multikernel LMS algorithms, we focus on the convergence behavior of the multiple-input multikernel LMS algorithm. More generally, the characteristics of multikernel LMS algorithms are analyzed theoretically and confirmed by simulation results. Next, the augmented complex kernel LMS algorithm is introduced based on the framework of complex multikernel adaptive filtering, and we analyze its convergence behavior in the mean-square error sense. Finally, in order to cope with distributed estimation problems over networks, we derive functional diffusion strategies in RKHS. The stability of the algorithm in the mean sense is analyzed.
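To give a concrete picture of the algorithm family discussed in this record, the following minimal Python sketch implements a Gaussian kernel LMS filter with a coherence-based dictionary. The kernel bandwidth, step size, and coherence threshold are illustrative assumptions and are not taken from the thesis.

```python
import numpy as np

def gaussian_kernel(x, y, bandwidth=0.5):
    """Gaussian (RBF) kernel between two input vectors."""
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return np.exp(-np.dot(diff, diff) / (2.0 * bandwidth ** 2))

class GaussianKernelLMS:
    """Kernel LMS with a Gaussian kernel and a coherence-based dictionary."""

    def __init__(self, step_size=0.1, bandwidth=0.5, coherence=0.9):
        self.mu = step_size          # LMS step size
        self.bw = bandwidth          # Gaussian kernel bandwidth
        self.coherence = coherence   # sparsification threshold for the dictionary
        self.dictionary = []         # stored input vectors (dictionary elements)
        self.alpha = []              # kernel expansion coefficients

    def predict(self, x):
        """Filter output: kernel expansion over the current dictionary."""
        return sum(a * gaussian_kernel(x, c, self.bw)
                   for a, c in zip(self.alpha, self.dictionary))

    def update(self, x, d):
        """One iteration: predict, compute the error, adapt the coefficients."""
        x = np.asarray(x, dtype=float)
        e = d - self.predict(x)
        # Coherence criterion: store x as a new dictionary element only if it
        # is sufficiently different from the elements already stored.
        if not self.dictionary or max(
                gaussian_kernel(x, c, self.bw) for c in self.dictionary) <= self.coherence:
            self.dictionary.append(x)
            self.alpha.append(0.0)
        # Stochastic-gradient (LMS) update of all expansion coefficients.
        for i, c in enumerate(self.dictionary):
            self.alpha[i] += self.mu * e * gaussian_kernel(x, c, self.bw)
        return e
```

Once the dictionary stops growing, this update reduces to a Gaussian kernel LMS operating on a fixed dictionary, which is the kind of setting whose mean and mean-square behavior the abstract above discusses.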
2

Proposta do Kernel Sigmoide (KSIG) e sua análise de convergência para a solução de problemas de filtragem adaptativa não linear / Proposal of the sigmoid kernel algorithm (KSIG) and its convergence analysis for solving nonlinear adaptive filtering problems

Silva, Éden Pereira da, 27 January 2017
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / Adaptive filtering is applied as a solution to many problems in engineering. There are many techniques to improve adaptive filtering, such as kernel methods and, in addition, the use of a pre-tuned dictionary. In this context, this work presents the KSIG algorithm, the kernel version of the Sigmoid algorithm, which uses the kernel to decrease the error and a non-linear, even cost function to increase the convergence speed. A version of KSIG with a pre-tuned dictionary is also described, in order to reduce the size of the data set used to compute the filter output, a growth that is a consequence of the kernel method. The theoretical efficiency of KSIG and of KSIG with a pre-tuned dictionary follows from their convergence proofs, which show that both algorithms converge in the mean. The learning curves obtained in the experiments show that, when the KSIG and KLMS algorithms are compared, the former converges faster, in fewer iterations, than the latter, both in the versions with and without a pre-tuned dictionary. / A filtragem adaptativa é aplicada na solução de diversos problemas da engenharia. Há muitas alternativas para melhorá-la; uma delas é o uso de kernel e, em adição, o uso de um dicionário pré-definido de dados. Neste contexto, este trabalho apresenta o KSIG, a versão em kernel do algoritmo Sigmoide, um algoritmo que otimiza o erro do filtro pelo emprego de uma função de custo par e não linear. Ademais, é apresentada a versão do KSIG com dicionário de dados pré-definido, visando à redução do grande número de dados utilizados para obtenção da saída, decorrente do uso da técnica com kernel. A eficiência teórica do KSIG e de sua versão com dicionário pré-definido é um resultado presente nas provas de convergência construídas para ambos os algoritmos, as quais demonstraram que estes convergem em média. Já as curvas de aprendizagem obtidas nas simulações computacionais dos experimentos realizados demonstraram que o KSIG, quando comparado ao KLMS em diferentes problemas de filtragem adaptativa, apresenta convergência mais rápida, em menos iterações, tanto nas versões sem quanto com dicionário pré-definido de ambos os algoritmos.
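As a rough illustration of the kind of algorithm described in this record (in the same spirit, not the thesis's exact method), the sketch below implements a kernel adaptive filter over a fixed, pre-tuned dictionary whose coefficients are adapted by gradient descent on an even, non-linear cost of the error. The cost J(e) = ln(cosh(e)), whose derivative tanh(e) is sigmoid-shaped, together with the step size and kernel bandwidth, are assumptions made for illustration only.

```python
import numpy as np

def gaussian_kernel(x, y, bandwidth=0.5):
    """Gaussian (RBF) kernel between two input vectors."""
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return np.exp(-np.dot(diff, diff) / (2.0 * bandwidth ** 2))

class KernelFilterEvenCost:
    """Kernel adaptive filter over a fixed (pre-tuned) dictionary, adapted by
    gradient descent on an even, non-linear cost of the error."""

    def __init__(self, dictionary, step_size=0.2, bandwidth=0.5):
        self.dictionary = [np.asarray(c, dtype=float) for c in dictionary]
        self.alpha = np.zeros(len(self.dictionary))   # expansion coefficients
        self.mu = step_size
        self.bw = bandwidth

    def _kernel_vector(self, x):
        """Kernels between the current input and every dictionary element."""
        return np.array([gaussian_kernel(x, c, self.bw) for c in self.dictionary])

    def predict(self, x):
        return float(self.alpha @ self._kernel_vector(x))

    def update(self, x, d):
        """One gradient step on the illustrative even cost J(e) = ln(cosh(e));
        its derivative tanh(e) is a sigmoid-shaped function of the error."""
        k = self._kernel_vector(x)
        e = d - float(self.alpha @ k)
        self.alpha += self.mu * np.tanh(e) * k
        return e
```

Because the dictionary is fixed in advance, each filter output is a sum over a bounded number of kernel evaluations, which reflects the motivation for the pre-tuned dictionary mentioned in the abstract above.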
