101

Etude de la bornitude des transformées de Riesz sur Lp via le Laplacien de Hodge-de Rham / Boundedness of the Riesz transforms on Lp via the Hodge-de Rham Laplacian

Magniez, Jocelyn 06 November 2015
This thesis has two intertwined subjects. The first is the study of the boundedness on Lp of the Riesz transform dΔ^(-1/2), where Δ denotes the nonnegative Laplace-Beltrami operator. The second is the Sobolev regularity W^(1,p) of the solution of the unperturbed heat equation. We also establish some results on the Riesz transforms of Schrödinger operators whose potential may have a negative part. Throughout this work, we consider a complete, non-compact Riemannian manifold (M, g). We assume that M satisfies the volume doubling property (with doubling constant D) as well as a Gaussian upper estimate for the heat kernel associated with Δ. We work with the Hodge-de Rham Laplacian acting on differential 1-forms of M. Relying on the Bochner formula, which links the Hodge-de Rham Laplacian to the Ricci curvature of M, we treat it as a vector-valued Schrödinger operator. A duality argument, based on an algebraic commutation formula, then ties the study of the Hodge-de Rham Laplacian to that of the Laplace-Beltrami operator. [...]
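The plain text of the abstract collapses the two Laplacians into one symbol. As a brief sketch of the standard identities behind the argument (the notation \vec{\Delta} for the Hodge-de Rham Laplacian on 1-forms is our editorial choice, an assumption about the thesis's convention):

```latex
% Bochner (Weitzenbock) formula on 1-forms: the Hodge-de Rham Laplacian is
% the rough Laplacian plus a zeroth-order curvature term, so it can be read
% as a Schrodinger operator with vector-valued "potential" Ric.
\[ \vec{\Delta} \;=\; \nabla^{*}\nabla + \mathrm{Ric} \]
% Commutation formula: d intertwines the two Laplacians, so bounds for
% functions of \vec{\Delta} transfer to the Riesz transform d\Delta^{-1/2}.
\[ d\,\Delta = \vec{\Delta}\,d
   \qquad\Longrightarrow\qquad
   d\,\Delta^{-1/2} = \vec{\Delta}^{\,-1/2} d \]
```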
102

Cooperative Execution of Opencl Programs on Multiple Heterogeneous Devices

Pandit, Prasanna Vasant January 2013
Computing systems have become heterogeneous with the increasing prevalence of multi-core CPUs, Graphics Processing Units (GPUs) and other accelerators. OpenCL has emerged as an attractive programming framework for heterogeneous systems. However, utilizing multiple devices in OpenCL is a challenge, as it requires the programmer to explicitly map data and computation to each device. Utilizing multiple devices simultaneously to speed up execution of a kernel is even more complex, as the relative execution time of the kernel on different devices can vary significantly. Also, after each kernel execution, a coherent version of the data needs to be established. This means that, in order to utilize all devices effectively, the programmer has to spend considerable time and effort to distribute work across all devices, keep track of modified data in these devices and correctly perform a merging step to put the data together. Further, the relative performance of a program may vary across different inputs, which means a statically determined work distribution may not work well. In this work, we present FluidiCL, an OpenCL runtime that takes a program written for a single device and uses multiple heterogeneous devices to execute each kernel. The runtime performs dynamic work distribution and cooperatively executes each kernel on all available devices. Since we consider a setup with devices having discrete address spaces, our solution ensures that execution of OpenCL work-groups on devices is adjusted by taking into account the overheads for data management. The data transfers and data merging needed to ensure coherence are handled transparently without requiring any effort from the programmer. FluidiCL also does not require prior training or profiling and is completely portable across different machines. Because it is dynamic, the runtime is able to adapt to system load. We have developed several optimizations for improving the performance of FluidiCL. We evaluate the runtime across different sets of devices. On a machine with an Intel quad-core processor and an NVidia Fermi GPU, FluidiCL shows a geomean speedup of nearly 64% over the GPU, 88% over the CPU and 14% over the best of the two devices in each benchmark. In all benchmarks, performance of our runtime comes to within 13% of the best of the two devices. FluidiCL shows similar results on a machine with a quad-core CPU and an NVidia Kepler GPU, with up to 26% speedup over the best of the two. We also present results considering an Intel Xeon Phi accelerator and a CPU, and find that FluidiCL performs up to 45% faster than the best of the two devices. We extend FluidiCL from a CPU-GPU scenario to a three-device setup having a quad-core CPU, an NVidia Kepler GPU and an Intel Xeon Phi accelerator, and find that FluidiCL obtains a geomean improvement of 6% in kernel execution time over the best of the three devices considered in each case.
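To make the coordination problem concrete, here is a minimal, hypothetical Python sketch of dynamic chunked work distribution, the flavour of scheduling FluidiCL automates: two unequal workers stand in for OpenCL devices pulling pieces of one kernel's index space from a shared queue. The chunk size, device names and squaring "kernel" are illustrative assumptions, not the thesis's actual runtime.

```python
import threading
import queue

def run_cooperatively(data, device_names, chunk=4096):
    """Toy analogue of dynamic work distribution: each 'device' repeatedly
    pulls a chunk of the global index range from a shared work queue, so a
    faster device naturally ends up executing more chunks. A real
    multi-device OpenCL runtime must additionally move buffers between
    discrete address spaces and merge results into one coherent copy."""
    chunks = queue.Queue()
    for lo in range(0, len(data), chunk):
        chunks.put((lo, min(lo + chunk, len(data))))
    out = [None] * len(data)

    def device_loop(name):
        while True:
            try:
                lo, hi = chunks.get_nowait()
            except queue.Empty:
                return  # no work left for this device
            # Stand-in for launching the kernel on work-groups [lo, hi).
            out[lo:hi] = [x * x for x in data[lo:hi]]

    workers = [threading.Thread(target=device_loop, args=(n,))
               for n in device_names]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return out

# A 'cpu' and a 'gpu' worker cooperating on one kernel's index space.
print(run_cooperatively(list(range(10)), ["cpu", "gpu"], chunk=3))
```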
103

Novel measures on directed graphs and applications to large-scale within-network classification

Mantrach, Amin 25 October 2010
In recent years, networks have become a major data source in fields ranging from the social sciences to the mathematical and physical sciences, and the size of available networks has grown substantially. This has brought a number of new challenges, such as the need for precise and intuitive measures to characterize and analyze large-scale networks in a reasonable time.

The first part of this thesis introduces a novel measure between two nodes of a weighted directed graph: the sum-over-paths covariance. It has a clear and intuitive interpretation: two nodes are considered highly correlated if they often co-occur on the same, preferably short, paths. This measure depends on a probability distribution over the (usually infinite) countable set of paths through the graph, obtained by minimizing the total expected cost between all pairs of nodes while fixing the total relative entropy spread in the graph. The entropy parameter biases the probability distribution over a wide spectrum: from natural random walks (where all paths are equiprobable) to walks biased towards shortest paths. The measure is then applied to semi-supervised classification problems on medium-size networks and compared to state-of-the-art techniques.

The second part introduces three novel algorithms for within-network classification in large-scale networks, i.e. classification of nodes in partially labeled graphs. The algorithms have a computing time linear in the number of edges, classes and steps, and can therefore be applied to large-scale networks. They obtained competitive results in comparison to the state of the art on the large-scale U.S. patents citation network and on eight other data sets. Furthermore, during the thesis we collected a novel benchmark data set, the U.S. patents citation network, which is now available to the community for benchmarking purposes.

The final part of the thesis concerns the combination of a citation graph with information on its nodes. We show that citation-based data provide better classification results than content-based data. We also show empirically that combining both sources of information (content-based and citation-based) should be considered when facing a text categorization problem: for instance, when classifying journal papers, extracting an external citation graph may considerably boost performance. However, in another context, when directly classifying the nodes of the citation network, features on the nodes will not necessarily improve the results.

The theory, algorithms and applications presented in this thesis provide interesting perspectives in various fields.
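For readers who want the mechanics, the path distribution described above is computable in closed form through a fundamental matrix. The numpy sketch below assumes the randomized-shortest-paths formulation (a reference random-walk transition matrix damped by path costs at inverse temperature theta); the tiny graph, unit costs and dense inverse are illustrative only, since large networks need sparse solvers.

```python
import numpy as np

def sum_over_paths_matrix(A, C, theta):
    """Fundamental matrix Z of the randomized-shortest-paths framework:
    Z[i, j] accumulates, over all paths from i to j, the natural random-walk
    probability of the path damped by exp(-theta * path cost). Small theta
    approaches the unbiased walk (theta = 0 itself makes I - W singular
    here); large theta concentrates mass on shortest paths. Sketch only:
    dense inverse, no absorbing-node handling."""
    P_ref = A / A.sum(axis=1, keepdims=True)   # natural random walk
    W = P_ref * np.exp(-theta * C)             # elementwise path weights
    n = A.shape[0]
    return np.linalg.inv(np.eye(n) - W)        # sums the series I + W + W^2 + ...

# Tiny weighted directed graph: adjacency A and edge costs C.
A = np.array([[0., 1., 1.],
              [0., 0., 1.],
              [1., 0., 0.]])
C = np.where(A > 0, 1.0, 0.0)                  # unit cost per edge
print(sum_over_paths_matrix(A, C, theta=1.0))
```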
104

Estimation adaptative avec des données transformées ou incomplètes. Application à des modèles de survie / Adaptive estimation with warped or incomplete data. Application to survival analysis

Chagny, Gaëlle 05 July 2013
This thesis presents various problems of adaptive functional estimation, by selection of projection or kernel estimators, using criteria inspired both by model selection and by Lepski's method. The common thread of our work is the use of transformed and/or incomplete data. The first part is devoted to an estimation procedure based on "warping", whose relevance is illustrated for the estimation of the following functions: additive and multiplicative regression, conditional density, cumulative distribution function in an interval-censoring model, and hazard rate for right-censored data. The aim is to reconstruct a function from a sample of random pairs (X, Y). We use the warped data (φ(X), Y) to propose adaptive estimators, where φ is a one-to-one function that we also estimate (for example, the cumulative distribution function of X). The interest is twofold: from the theoretical point of view, the estimators are optimal in the oracle sense; from the practical point of view, they are explicit and numerically stable. The second part addresses a two-sample problem: we compare the distributions of two variables X and X₀ through the relative density, defined as the density of the variable F₀(X) (F₀ being the c.d.f. of X₀). We build adaptive estimators from a double data sample, possibly censored. Non-asymptotic risk bounds are proved, and convergence rates are derived.
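As a concrete, hedged illustration of the warping device, the sketch below implements a plain Nadaraya-Watson estimator on the warped sample (F̂(Xᵢ), Yᵢ) and evaluates it at F̂(x); the Gaussian kernel and the fixed bandwidth are illustrative choices, not the adaptive selection rule the thesis studies.

```python
import numpy as np

def warped_regression(X, Y, x_query, h=0.05):
    """Warped kernel regression: replace the design points X_i by their
    empirical-cdf ranks F_hat(X_i), smooth Y against those warped points,
    and evaluate at F_hat(x). Warping spreads the design uniformly on
    [0, 1], which is what makes the estimator explicit and stable. The
    bandwidth h is fixed here, whereas the thesis selects it adaptively."""
    n = len(X)
    ranks = np.argsort(np.argsort(X)) / n          # F_hat at design points
    u = np.searchsorted(np.sort(X), x_query) / n   # F_hat at query points
    w = np.exp(-0.5 * ((u[:, None] - ranks[None, :]) / h) ** 2)
    return (w @ Y) / w.sum(axis=1)                 # Nadaraya-Watson on warped data

rng = np.random.default_rng(0)
X = rng.exponential(size=500)                      # strongly skewed design
Y = np.sin(X) + 0.1 * rng.normal(size=500)
print(warped_regression(X, Y, np.array([0.5, 1.0, 2.0])))
```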
105

Pairwise Classification and Pairwise Support Vector Machines

Brunner, Carl 16 May 2012
Several modifications have been suggested to extend binary classifiers to multiclass classification, for instance the One Against All technique, the One Against One technique, or Directed Acyclic Graphs. A recent approach to multiclass classification is pairwise classification, which relies on two input examples instead of one and predicts whether the two input examples belong to the same class or to different classes. A Support Vector Machine (SVM) that handles pairwise classification tasks is called a pairwise SVM. A common pairwise classification task is face recognition: a set of images is given for training and another set for testing. Often, one is interested in the interclass setting, which means that no person represented by an image in the training set is represented by any image in the test set. Of the multiclass classification techniques mentioned, only pairwise classification provides meaningful results in the interclass setting.

For a pairwise classifier, the order of the two examples should not influence the classification result. A common approach to enforce this symmetry is the use of selected kernels. Relations between such kernels and certain projections are provided, and it is shown that those projections can lead to an information loss. For pairwise SVMs, another approach to enforcing symmetry is the symmetrization of the training sets; in other words, if the pair (a,b) of examples is a training pair, then (b,a) is a training pair too. It is proven that both approaches lead to the same decision function for selected parameters. Empirical tests show that the approach using selected kernels is three to four times faster. For good interclass generalization, pairwise SVMs need training sets with several million training pairs. A technique is presented that speeds up the training of pairwise SVMs by a factor of up to 130 and thus enables learning from training sets with several million pairs. Parameter selection is another time-consuming element: even with the applied speed-up techniques, a grid search over the set of parameters would be very expensive, so a much less computationally expensive model selection technique is introduced.

In machine learning, the training set and the test set are created by some data generating process. Several pairwise data generating processes are derived from a given non-pairwise data generating process, and their advantages and disadvantages are evaluated. Pairwise Bayes classifiers are introduced and their properties discussed. It is shown that pairwise Bayes classifiers for interclass generalization tasks can differ from pairwise Bayes classifiers for interexample generalization tasks. In face recognition, the interexample task implies that each person represented by an image in the test set is also represented by at least one image in the training set; moreover, the set of images of the training set and the set of images of the test set are disjoint. Pairwise SVMs are applied to four synthetic and two real-world datasets. One of the real-world datasets is the Labeled Faces in the Wild (LFW) database; the other is provided by Cognitec Systems GmbH. The synthetic databases provide empirical evidence for the presented model selection heuristic, the discussion about the loss of information and the provided speed-up techniques, and show that pairwise SVM classifiers reach a quality similar to pairwise Bayes classifiers. Additionally, a pairwise classifier is identified for the LFW database which leads to an average equal error rate (EER) of 0.0947 with a standard error of the mean (SEM) of 0.0057. This result is better than that of the current state-of-the-art classifier, the combined probabilistic linear discriminant analysis classifier, which leads to an average EER of 0.0993 and a SEM of 0.0051.
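A minimal sketch of the pairwise setup, assuming scikit-learn is available: pairs of examples are labeled "same class" or "different class", and symmetry is enforced by the training-set symmetrization described above, adding both (a,b) and (b,a). The Gaussian toy data and concatenated features are illustrative assumptions, not the thesis's face-recognition pipeline or its selected kernels.

```python
import numpy as np
from sklearn.svm import SVC

def make_pairs(X, y, rng, n_pairs=2000):
    """Build a symmetrized pairwise training set: each pair of examples
    becomes one input [a, b]; the label says whether a and b share a class.
    Symmetrization: for every pair (a, b) we also include (b, a)."""
    idx = rng.integers(0, len(X), size=(n_pairs, 2))
    A, B = X[idx[:, 0]], X[idx[:, 1]]
    same = (y[idx[:, 0]] == y[idx[:, 1]]).astype(int)
    feats = np.vstack([np.hstack([A, B]), np.hstack([B, A])])
    return feats, np.hstack([same, same])

rng = np.random.default_rng(0)
# Three toy classes with shifted means, 100 examples each.
X = rng.normal(size=(300, 5)) + np.repeat(np.arange(3), 100)[:, None]
y = np.repeat(np.arange(3), 100)
pairs, same = make_pairs(X, y, rng)
clf = SVC(kernel="rbf", gamma="scale").fit(pairs, same)

# Interclass-style query: two fresh examples never seen during training;
# label 1 means "same class".
a, b = rng.normal(size=5) + 2, rng.normal(size=5) + 2
print(clf.predict(np.hstack([a, b]).reshape(1, -1)))
```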
106

Automatic Pronoun Resolution for Swedish / Automatisk pronomenbestämning på svenska

Ahlenius, Camilla January 2020
This report describes a quantitative analysis performed to compare two different methods on the task of pronoun resolution for Swedish. The first method, an implementation of Mitkov's algorithm, is heuristic-based, meaning that the resolution is determined by a number of manually engineered rules covering both syntactic and semantic information. The second method is data-driven: a Support Vector Machine (SVM) using dependency trees and word embeddings as features. Both methods are evaluated on an annotated corpus of Swedish news articles created as part of this thesis. The SVM-based methods significantly outperformed the implementation of Mitkov's algorithm. The best-performing SVM model relies on tree kernels applied to dependency trees; it achieved an F1-score of 0.76 for the positive class and 0.90 for the negative class, where positives are pairs of pronoun and noun phrase that corefer, and negatives are pairs that do not corefer.
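To illustrate the heuristic side of the comparison, here is a toy, hypothetical scorer in the spirit of Mitkov's algorithm: candidate noun phrases are filtered by agreement and ranked by a handful of manually engineered indicators. The specific indicators and weights below are invented for illustration and are not the rule set implemented in the thesis.

```python
def score_candidate(cand, pronoun):
    """Hypothetical Mitkov-style antecedent scoring: each hand-written rule
    votes for or against a candidate noun phrase; the top scorer wins."""
    s = 0
    s += 0 if cand["number"] == pronoun["number"] else -1000  # agreement filter
    s += 2 if cand["is_first_np"] else 0   # salience of sentence-initial NP
    s += 1 if cand["repeated"] else 0      # lexical reiteration
    s -= cand["distance"]                  # prefer recent candidates
    return s

# Toy Swedish example: resolve "den" against two candidate noun phrases.
candidates = [
    {"text": "regeringen", "number": "sg", "is_first_np": True,
     "repeated": True, "distance": 1},
    {"text": "förslagen", "number": "pl", "is_first_np": False,
     "repeated": False, "distance": 0},
]
pronoun = {"text": "den", "number": "sg"}
best = max(candidates, key=lambda c: score_candidate(c, pronoun))
print(best["text"])  # -> "regeringen"
```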
107

Bayes Optimality in Classification, Feature Extraction and Shape Analysis

Hamsici, Onur C. 11 September 2008
No description available.
108

Kernel Methods for Nonlinear Identification, Equalization and Separation of Signals

Vaerenbergh, Steven Van 03 February 2010
In the last decade, kernel methods have become established techniques for nonlinear signal processing. Thanks to their foundation in the solid mathematical framework of reproducing kernel Hilbert spaces (RKHS), kernel methods yield convex optimization problems. In addition, they are universal nonlinear approximators and require only moderate computational complexity. These properties make them an attractive alternative to traditional nonlinear techniques such as Volterra series, polynomial filters and neural networks. Kernel methods also present certain drawbacks that must be addressed adequately in each application, for instance the difficulties of handling large data sets and the overfitting that can arise from working in infinite-dimensional spaces. This work studies the application of kernel methods to nonlinear problems in signal processing and communications: identification and equalization of nonlinear systems, both in supervised and blind scenarios, kernel adaptive filtering, and nonlinear blind source separation. In a first contribution, a framework for identification and equalization of nonlinear Wiener and Hammerstein systems is designed, based on kernel canonical correlation analysis (KCCA). As a result of this study, various related techniques are proposed, including two kernel recursive least-squares (KRLS) algorithms with fixed memory size, and a KCCA-based blind equalization technique for Wiener systems that uses oversampling. The second part of this thesis treats two nonlinear blind decoding problems of sparse data, posed under conditions that do not permit the application of traditional clustering techniques. For these problems, which include the blind decoding of fast time-varying MIMO channels, a set of algorithms based on spectral clustering is designed. The effectiveness of the proposed techniques is demonstrated through various simulations.
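As a taste of the machinery, the sketch below uses batch Gaussian-kernel ridge regression, a close relative of the KRLS algorithms mentioned above, to identify a toy Wiener system (a linear filter followed by a static nonlinearity) from time-embedded input samples. The kernel width, regularizer and system are illustrative assumptions; the thesis's algorithms compute this kind of solution recursively, under a fixed memory budget.

```python
import numpy as np

def kernel_ridge(X, y, Xq, sigma=1.0, lam=1e-2):
    """Gaussian-kernel ridge regression: a convex, RKHS-based nonlinear
    regressor. Batch form for clarity; KRLS computes comparable solutions
    sample by sample."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    dq = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-dq / (2 * sigma**2)) @ alpha

# Toy Wiener system: FIR filter followed by a static tanh nonlinearity.
rng = np.random.default_rng(1)
u = rng.normal(size=400)
lin = np.convolve(u, [1.0, 0.5, -0.2])[: len(u)]   # linear block
y = np.tanh(lin) + 0.05 * rng.normal(size=len(u))  # static nonlinearity + noise
X = np.stack([u, np.roll(u, 1), np.roll(u, 2)], axis=1)  # time-embedded input
print(kernel_ridge(X[5:300], y[5:300], X[300:305]))      # predicted outputs
```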
109

Estimação da causalidade de Granger no caso de interação não-linear. / Nonlinear connectivity estimation by Granger causality technique.

Massaroppe, Lucas 08 August 2016
This thesis examines the problem of detecting connectivity between time series in the Granger sense when the nonlinear nature of the interactions rules out their determination via linear vector autoregressive models. Detection remains feasible with the aid of so-called kernel methods, which have become popular in machine learning, since they allow generalized forms of the Granger test, partial directed coherence and the directed transfer function to be defined. Simulated detection examples show that asymptotic results originally derived for linear estimators generalize analogously and remain valid in the kernelized setting.
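Schematically, and only as a hedged illustration reusing the same Gaussian-kernel ridge machinery as the previous sketch, a kernelized Granger comparison fits a restricted model of y's own past against an unrestricted model that also sees x's past, and asks whether the extra series shrinks the prediction error. The fixed kernel width and the absence of a formal significance test are simplifications relative to the asymptotic theory the thesis develops.

```python
import numpy as np

def krr_residuals(Z, y, sigma=1.0, lam=1e-2):
    """In-sample residuals of Gaussian-kernel ridge regression of y on Z."""
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))
    alpha = np.linalg.solve(K + lam * np.eye(len(Z)), y)
    return y - K @ alpha

def kernel_granger_ratio(x, y, p=2):
    """Compare y-prediction error variance with and without x's past;
    a ratio well above 1 hints that x Granger-causes y nonlinearly."""
    past = lambda s: np.stack([s[p - k - 1 : len(s) - k - 1]
                               for k in range(p)], axis=1)
    Yp, Xp, yt = past(y), past(x), y[p:]
    r_restricted = krr_residuals(Yp, yt)              # y's own past only
    r_full = krr_residuals(np.hstack([Yp, Xp]), yt)   # plus x's past
    return r_restricted.var() / r_full.var()

rng = np.random.default_rng(2)
x = rng.normal(size=500)
y = np.empty(500)
y[0] = 0.0
for t in range(1, 500):  # y driven nonlinearly by past x
    y[t] = 0.4 * y[t - 1] + np.sin(x[t - 1]) + 0.1 * rng.normal()
print(kernel_granger_ratio(x, y))  # expect a ratio clearly above 1
```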
