251. Biagrupamento heurístico e coagrupamento baseado em fatoração de matrizes: um estudo em dados textuais / Heuristic biclustering and coclustering based on matrix factorization: a study on textual data. Ramos Diaz, Alexandra Katiuska. 16 October 2018.
Biclustering and coclustering are data mining tasks that allow the extraction of relevant information from data and have been applied successfully in a wide variety of domains, including those involving textual data, the focus of this research. In biclustering and coclustering tasks, similarity criteria are applied simultaneously to the rows and columns of the data matrices, grouping objects and attributes at the same time and enabling the discovery of biclusters/coclusters. Their definitions, however, vary according to their nature and objectives; coclustering can be seen as a generalization of biclustering. When applied to textual data, these tasks demand a vector space model representation, which commonly leads to spaces characterized by high dimensionality and sparsity and affects the performance of many algorithms. This work presents an analysis of the behavior of the Cheng and Church biclustering algorithm and of the Non-Negative Block Value Decomposition (NBVD) coclustering algorithm in the context of textual data. Quantitative and qualitative experimental results are reported from experiments with these algorithms on synthetic datasets created with different sparsity levels and on a real dataset. The results are evaluated in terms of biclustering-specific measures, internal clustering measures computed on the row projections of the biclusters/coclusters, and the information they generate. The analysis of the results clarifies the difficulties faced by these algorithms in the experimental environments, as well as whether they are able to provide distinctive, useful information for text mining. Overall, the analyses showed that the NBVD algorithm is better suited to high-dimensional, highly sparse datasets. The Cheng and Church algorithm, although it obtained good results according to its own objectives, produced results of low relevance in the context of textual data.
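To make the coclustering model concrete: NBVD approximates a non-negative document-term matrix X as U B V^T, where U and V induce row (document) and column (term) clusters and B holds the block values. Below is a minimal sketch of the standard multiplicative updates for this decomposition; matrix sizes, iteration count and toy data are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def nbvd(X, k, l, n_iter=200, eps=1e-9, seed=0):
    """Minimal NBVD sketch: X (n x m, non-negative) ~ U @ B @ V.T,
    with U (n x k) giving row clusters, V (m x l) column clusters,
    and B (k x l) the block value matrix."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    U, B, V = rng.random((n, k)), rng.random((k, l)), rng.random((m, l))
    for _ in range(n_iter):
        # Multiplicative updates keep all three factors non-negative.
        U *= (X @ V @ B.T) / (U @ B @ V.T @ V @ B.T + eps)
        B *= (U.T @ X @ V) / (U.T @ U @ B @ V.T @ V + eps)
        V *= (X.T @ U @ B) / (V @ B.T @ U.T @ U @ B + eps)
    return U, B, V

# Toy document-term matrix; cluster memberships read off via argmax.
X = np.random.default_rng(1).random((20, 30))
U, B, V = nbvd(X, k=3, l=4)
row_clusters, col_clusters = U.argmax(axis=1), V.argmax(axis=1)
```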
252. Apprentissage avec la parcimonie et sur des données incertaines par la programmation DC et DCA / Learning with sparsity and uncertainty by Difference of Convex functions optimization. Vo, Xuan Thanh. 15 October 2015.
In this thesis, we focus on developing optimization approaches for solving some classes of learning problems involving sparsity and/or data uncertainty. Our methods are based on DC (Difference of Convex functions) programming and DCA (DC Algorithms), which are well known as powerful optimization tools. The thesis is composed of two parts: the first part concerns sparsity, while the second deals with data uncertainty. In the first part, a unified DC approximation approach to optimization problems involving the zero-norm in the objective is thoroughly studied, on both the theoretical and the computational side. We consider a common DC approximation of the zero-norm that includes all standard sparsity-inducing penalty functions, and develop general DCA schemes that cover all standard algorithms in the field. Next, the thesis turns to the non-negative matrix factorization (NMF) problem. We investigate the structure of the problem and provide appropriate DCA-based algorithms. To enhance the performance of NMF, sparse NMF formulations are proposed. Continuing this topic, we study the dictionary learning problem, where sparse representation plays a crucial role. In the second part, we exploit robust optimization techniques to deal with data uncertainty in two important machine learning problems: feature selection in linear Support Vector Machines (SVM) and clustering. In this context, each individual data point is uncertain but varies within a bounded uncertainty set. Different uncertainty models (box/spherical/ellipsoidal) are studied, and DCA-based algorithms are developed to solve the resulting robust problems.
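As one concrete instance of a DCA scheme for zero-norm approximation: with the capped-l1 penalty min(|x_i|, theta), the objective splits as a difference of convex functions, and each DCA step linearizes the concave part, leaving a weighted l1 subproblem. The sketch below, with an ISTA inner solver and illustrative parameters, is a minimal member of the family of schemes the thesis unifies, not the thesis's exact algorithm.

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, y, lam, L, x0, n_iter=500):
    # Solves the convex subproblem min_x 0.5||Ax-b||^2 - y.T x + lam*||x||_1
    # by proximal gradient with step 1/L, L = ||A||_2^2.
    x = x0.copy()
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b) - y
        x = soft(x - grad / L, lam / L)
    return x

def dca_capped_l1(A, b, lam=0.1, theta=0.1, n_outer=20):
    """DCA sketch for 0.5||Ax-b||^2 + lam*sum_i min(|x_i|, theta).
    DC split: min(t, theta) = t - max(t - theta, 0); the concave part
    is linearized at each outer iteration."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_outer):
        # Subgradient of the concave part lam*sum max(|x_i|-theta, 0).
        y = lam * np.sign(x) * (np.abs(x) > theta)
        x = ista(A, b, y, lam, L, x)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = dca_capped_l1(A, b)
```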
253. Evaluation de l'adhérence au contact roue-rail par analyse d'images spectrales / Wheel-track adhesion evaluation using spectral imaging. Nicodeme, Claire. 4 July 2018.
The advantage of the train, since its creation, has been its low rolling resistance, due to the iron-on-iron contact of the wheel on the rail, which results in low adhesion. However, this low adhesion is also a major drawback: being dependent on environmental conditions, it deteriorates easily when the rail is polluted (vegetation, grease, water, etc.). Nowadays, the measures taken to cope with degraded adhesion directly impact system performance and lead, in particular, to a loss of transport capacity. The objective of the project is to use new spectral imaging technology to identify areas of reduced adhesion on the rails, and their cause, in order to quickly raise alerts and adapt the train's behaviour. The study strategy took the following three points into account: the detection system, installed on board commercial trains, must be independent of the train; the detection and identification process must not interact with the pollution, so as not to invalidate the measurement, which is why a Non-Destructive Testing approach was chosen; and spectral imaging technology makes it possible to work both in the spatial domain (distance measurement, object detection) and in the spectral domain (material detection and recognition by analysis of spectral signatures). Within the three years allotted to the thesis, we focused on validating the concept through laboratory studies and analyses that could be carried out on the premises of SNCF Ingénierie & Projets. The key steps were the construction of an evaluation bench and the choice of the vision system, the creation of a library of reference spectral signatures, and the development of supervised and unsupervised pixel classification algorithms. This work led to the filing of a patent and to papers published at IEEE conferences.
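The abstract does not name the classifiers used; as one plausible supervised baseline for matching pixel spectra against a library of reference signatures, here is a minimal spectral-angle-mapper sketch (the threshold, band count and toy data are assumptions):

```python
import numpy as np

def spectral_angle(pixels, library):
    """Angle (radians) between each pixel spectrum and each reference
    signature: pixels (n_pix x n_bands), library (n_ref x n_bands)."""
    p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    r = library / np.linalg.norm(library, axis=1, keepdims=True)
    cos = np.clip(p @ r.T, -1.0, 1.0)
    return np.arccos(cos)

def classify(pixels, library, max_angle=0.15):
    """Assign each pixel to the closest reference signature; pixels whose
    best angle exceeds the threshold stay unlabeled (-1)."""
    ang = spectral_angle(pixels, library)
    labels = ang.argmin(axis=1)
    labels[ang.min(axis=1) > max_angle] = -1
    return labels

# Toy example: 3 hypothetical signatures (clean rail, grease, vegetation).
rng = np.random.default_rng(0)
library = rng.random((3, 64))
pixels = library[rng.integers(0, 3, 500)] + 0.02 * rng.standard_normal((500, 64))
labels = classify(pixels, library)
```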
254. Non-negative matrix factorization for integrative clustering. Brdar, Sanja. 15 December 2016.
Integrative approaches are motivated by the desired improvement in robustness, stability and accuracy. Clustering, the prevailing technique for preliminary and exploratory analysis of experimental data, may benefit from integration across multiple partitions. In this thesis we propose integration methods based on non-negative matrix factorization that can fuse clusterings stemming from different data sets, different data preprocessing steps or different sub-samples of objects or features. The proposed methods are evaluated from several points of view on typical machine learning data sets from the UCI repository, on synthetic data, and, above all, on data from the bioinformatics realm, whose rise is fuelled by technological revolutions in molecular biology. Sophisticated computational methods are necessary for the vast amounts of 'omics' data available nowadays. We evaluated the methods on problems from cancer genomics, functional genomics and metagenomics.
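One common way to fuse multiple partitions with NMF, in the spirit described above (the thesis's own methods may differ in detail), is to factor the co-association matrix of the clustering ensemble. A minimal sketch on toy data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import NMF

def consensus_nmf(partitions, n_clusters, seed=0):
    """Fuse several labelings of the same n objects: build the average
    co-association matrix C (C[i,j] = fraction of partitions placing i and
    j together), factor the symmetric non-negative C with NMF, and read
    consensus labels off the dominant component."""
    n = len(partitions[0])
    C = np.zeros((n, n))
    for labels in partitions:
        labels = np.asarray(labels)
        C += (labels[:, None] == labels[None, :])
    C /= len(partitions)
    W = NMF(n_components=n_clusters, init="nndsvda",
            max_iter=500, random_state=seed).fit_transform(C)
    return W.argmax(axis=1)

# Toy example: three K-means runs on noisy copies of the same data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, (30, 2)) for c in (0.0, 3.0, 6.0)])
runs = [KMeans(3, n_init=5, random_state=s).fit_predict(
            X + 0.1 * rng.standard_normal(X.shape)) for s in range(3)]
labels = consensus_nmf(runs, n_clusters=3)
```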
255. Méthodes informées de factorisation matricielle pour l'étalonnage de réseaux de capteurs mobiles et la cartographie de champs de pollution / Informed matrix factorization methods for mobile sensor network calibration and pollution field mapping. Dorffer, Clément. 13 December 2017.
Mobile crowdsensing aims to acquire geolocated and timestamped data from a crowd of sensors (embedded in or connected to smartphones). In this thesis, we focus on processing data from environmental mobile crowdsensing. In particular, we propose to revisit blind sensor calibration as an informed matrix factorization problem with missing entries, where the factor matrices respectively contain the calibration model, a function of the observed physical phenomenon (we consider approaches for affine and nonlinear sensor responses), and the calibration parameters of each sensor. Moreover, in the air quality monitoring application we consider, we assume that some precise measurements, sparsely distributed in space and time, are available, and we fuse them with the many measurements from the mobile sensors. Our approaches are called "informed" because (i) the factor matrices are structured by the nature of the problem, (ii) the observed phenomenon admits a sparse decomposition in a known dictionary or can be approximated by a physical or geostatistical model, and (iii) the mean calibration function of the sensors to be calibrated is known. The proposed approaches outperform methods based on completing the observed data matrix as well as the multi-hop calibration techniques from the literature, which are based on robust regression. Finally, the informed matrix factorization formalism also provides a fine-grained reconstruction of the observed physical field.
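Stripped of the calibration-specific structure, the backbone of such a method is a low-rank factorization fitted only on observed entries. Below is a minimal masked alternating-least-squares sketch; the rank, regularization and mask density are illustrative, and the thesis's informed variants add structural constraints on the factors that this sketch omits.

```python
import numpy as np

def masked_als(Y, M, rank=2, n_iter=50, reg=1e-3, seed=0):
    """Low-rank factorization Y ~ W @ H fitted only where M is True
    (e.g. M[i, j] = sensor j reported at location/time i).
    Alternating ridge-regularized least squares, row/column at a time."""
    rng = np.random.default_rng(seed)
    n, m = Y.shape
    W, H = rng.random((n, rank)), rng.random((rank, m))
    I = reg * np.eye(rank)
    for _ in range(n_iter):
        for i in range(n):                      # update row factors
            o = M[i]
            Ho = H[:, o]
            W[i] = np.linalg.solve(Ho @ Ho.T + I, Ho @ Y[i, o])
        for j in range(m):                      # update column factors
            o = M[:, j]
            Wo = W[o]
            H[:, j] = np.linalg.solve(Wo.T @ Wo + I, Wo.T @ Y[o, j])
    return W, H

# Toy example: a rank-2 field observed through a sparse random mask.
rng = np.random.default_rng(1)
Y_full = rng.random((40, 2)) @ rng.random((2, 15))
M = rng.random(Y_full.shape) < 0.4
W, H = masked_als(Y_full * M, M)
```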
256. Paralelização de inferência em redes credais utilizando computação distribuída para fatoração de matrizes esparsas / Parallelization of credal network inference using distributed computing for sparse matrix factorization. Ramon Fortes Pereira. 25 April 2017.
This study aims to improve the computational performance of credal network inference algorithms by applying parallel computing and distributed systems techniques to sparse matrix factorization algorithms. Roughly speaking, parallel computing techniques transform a system into one whose algorithms can be executed concurrently, and matrix factorization is a family of mathematical techniques for decomposing a matrix into a product of two or more matrices. Sparse matrices are matrices in which most values are zero. Credal networks are similar to Bayesian networks, which are acyclic graphs representing a joint probability through conditional probabilities and their independence relations; credal networks can be considered an extension of Bayesian networks for dealing with uncertainty or poor data quality. To apply parallel sparse matrix factorization techniques to credal network inference, the variable elimination method is used: the acyclic graph of the credal network is associated with a sparse matrix, and eliminating a variable is analogous to eliminating a column.
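The analogy in the last sentence can be made concrete: summing out a variable in the network's moral graph creates the same "fill-in" as eliminating a column in sparse factorization, so the same ordering heuristics apply. A small sketch of greedy minimum-degree ordering on an illustrative graph (not code from the study):

```python
import numpy as np

def min_degree_order(adj):
    """Greedy minimum-degree elimination on a symmetric adjacency matrix,
    the heuristic used to order columns in sparse Cholesky-style
    factorization. Eliminating a vertex connects its remaining neighbours
    (fill-in), just as summing out a variable in a credal/Bayesian
    network couples the variables it shares factors with."""
    A = adj.astype(bool).copy()
    np.fill_diagonal(A, False)
    alive, order = set(range(A.shape[0])), []
    while alive:
        v = min(alive, key=lambda u: A[u].sum())   # smallest current degree
        nb = np.flatnonzero(A[v])
        A[np.ix_(nb, nb)] = True                   # fill-in: clique on neighbours
        np.fill_diagonal(A, False)
        A[v, :] = A[:, v] = False                  # remove v from the graph
        alive.remove(v)
        order.append(v)
    return order

# Toy moral graph with edges 0-1, 1-2, 2-3, 1-3, 3-4.
A = np.zeros((5, 5), dtype=bool)
for i, j in [(0, 1), (1, 2), (2, 3), (1, 3), (3, 4)]:
    A[i, j] = A[j, i] = True
print(min_degree_order(A))
```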
258. Méthodes avancées de séparation de sources applicables aux mélanges linéaires-quadratiques / Advanced methods of source separation applicable to linear-quadratic mixtures. Jarboui, Lina. 18 November 2017.
In this thesis, we propose new Blind Source Separation (BSS) methods adapted to nonlinear mixing models. BSS consists in estimating unknown source signals from their observed mixtures when very little information is available about the mixing model. The methodological contribution of this thesis is to take into account the nonlinear interactions that can occur between sources by using the linear-quadratic (LQ) model. To this end, we developed three new BSS methods. The first method addresses the hyperspectral unmixing problem using a linear-quadratic model; it is based on Sparse Component Analysis (SCA) and requires the existence of pure pixels in the observed scene. With the same goal, we propose a second hyperspectral unmixing method adapted to the linear-quadratic model: a Non-negative Matrix Factorization (NMF) method based on the Maximum A Posteriori (MAP) estimator, which takes into account prior information on the distributions of the unknowns in order to estimate them better. Finally, we propose a third BSS method based on Independent Component Analysis (ICA), exploiting Second-Order Statistics (SOS) to handle a particular case of the linear-quadratic mixture, the bilinear mixture.
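To fix ideas about the LQ model itself: an observation is a linear combination of the sources plus weighted pairwise products of them. The sketch below solves only the easy non-blind case (known source spectra, one pixel) by non-negative least squares on an augmented source matrix; the thesis treats the much harder blind setting, and all data here are toy assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def lq_unmix(x, S):
    """Given known source spectra S (r x n_bands) and one observed pixel x,
    estimate linear coefficients a and quadratic coefficients b in the
    linear-quadratic model x ~ a.T S + sum_{i<=j} b_ij (S_i * S_j),
    via non-negative least squares on an augmented source matrix."""
    r = S.shape[0]
    pairs = [(i, j) for i in range(r) for j in range(i, r)]
    S_aug = np.vstack([S] + [S[i] * S[j] for i, j in pairs])
    coef, _ = nnls(S_aug.T, x)
    return coef[:r], dict(zip(pairs, coef[r:]))

# Toy example: 3 sources plus one small bilinear interaction term.
rng = np.random.default_rng(0)
S = rng.random((3, 50))
x = 0.5 * S[0] + 0.3 * S[1] + 0.2 * S[2] + 0.05 * S[0] * S[1]
a, b = lq_unmix(x, S)
```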
259. Designing an overlay hybrid cognitive radio including channel estimation issues. Abdou, Ahmed. 4 December 2014.
Cognitive radio (CR) has been proposed as a technology to improve spectrum efficiency by giving unlicensed users opportunistic access to the licensed users' spectra. In this thesis, our purpose is to propose a specific CR scenario and to present solutions to some related problems. For this purpose, we consider an overlay CR consisting of a primary macro-cell and cognitive small cells of cooperative secondary base stations (SBS). We suggest studying a hybrid CR where filter bank multicarrier (FBMC) modulation is used for the secondary users (SU), whereas the primary users (PU) rely on orthogonal frequency division multiplexing (OFDM). This choice is motivated by the following reasons: as OFDM is used in many current wideband primary systems, a significant bandwidth can be reused when OFDM is assumed for the PU. Concerning the secondary system, although OFDM has been recognized as a candidate for CR systems, FBMC modulation is another candidate that overcomes some OFDM drawbacks. Indeed, compared to OFDM, FBMC has the advantage of reducing the SU interference level induced by the differences between the SBS and PU carrier frequency offsets (CFO). In order to cancel the interference, precoding can be inserted at the SBS. Therefore, we derive the expression of the interference due to the SU at the PU receiver. Then, zero-forcing beamforming (ZFBF) is used to cancel the interference. To confirm the efficiency of the proposed scheme, we carry out a comparative study with a CR based on OFDM for both the PU and the SU. However, applying ZFBF depends on the channels between the SBS and the PU, so channel estimation is necessary. For this purpose, we propose to approximate the channel by an autoregressive (AR) process and to address channel estimation by using a training sequence. The received signals, also called the observations, are disturbed by an additive measurement noise, which can be: 1) additive and white. In that case, the AR parameters and the channel can be jointly estimated from the received noisy signal by a recursive approach. Nevertheless, the corresponding state-space representation of the system is nonlinear. In addition to the existing methods, we carry out a complementary study investigating the relevance of the quadrature Kalman filter (QKF) and the cubature Kalman filter (CKF), and compare them with other nonlinear Kalman-based approaches. 2) additive and colored. In that case, a parametric approach can be considered, based on an a priori model of the noise; here, a moving average (MA) model is studied. Our approach operates as follows. Firstly, the AR parameters are estimated by using the overdetermined high-order Yule-Walker (HOYW) equations; the variance of the AR driving process can be deduced by means of an orthogonal projection between two types of estimates of the AR-process correlation vectors, and the correlation sequence of the MA noise is then estimated. Secondly, the MA parameters are obtained by using a new variant of the inner-outer factorization approach. We study the advantages and the limits of the proposed method and compare it with existing algorithms such as the improved least squares-colored noise (ILS-CN), the Yule-Walker ILS (YW-ILS) and the prediction error method (PEM). The proposed method is first evaluated on synthetic AR and MA processes and is then applied to channel estimation in mobile communication.
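As an illustration of the zero-forcing beamforming step described above, shown here in its generic narrowband form with perfect channel knowledge rather than the thesis's FBMC/OFDM derivation: the secondary precoder is built from the null space of the stacked SBS-to-PU channels, so the secondary signal arrives at the primary receivers with zero gain.

```python
import numpy as np

def null_space_precoder(H_pu):
    """ZFBF sketch: H_pu (n_pu x n_tx) stacks the channels from the
    cooperating secondary transmit antennas to the primary receivers.
    The precoder columns span the null space of H_pu, so the precoded
    secondary signal is invisible to the primary receivers."""
    _, s, Vh = np.linalg.svd(H_pu)
    rank = int((s > 1e-10).sum())
    return Vh.conj().T[:, rank:]              # n_tx x (n_tx - rank)

rng = np.random.default_rng(0)
n_tx, n_pu = 4, 2
H_pu = (rng.standard_normal((n_pu, n_tx))
        + 1j * rng.standard_normal((n_pu, n_tx))) / np.sqrt(2)
W = null_space_precoder(H_pu)
x_su = rng.standard_normal(W.shape[1]) + 1j * rng.standard_normal(W.shape[1])
interference = H_pu @ (W @ x_su)
print(np.allclose(interference, 0))           # True: the PU sees no SU signal
```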
260. Investigating the large N limit of SU(N) Yang-Mills gauge theories on the lattice. García Vera, Miguel Francisco. 2 August 2017.
In this thesis we present results for the topological susceptibility χ and investigate the property of factorization in the 't Hooft large N limit of SU(N) pure Yang-Mills gauge theory. A key component in the lattice gauge theory computation of χ is the estimation of the topological charge density correlator, which is affected by a severe signal-to-noise problem. To alleviate this problem, we introduce a novel algorithm that uses a multilevel-type approach to compute the correlation functions of observables smoothed with the Yang-Mills gradient flow. When applied to our observables, the results show a scaling of the error which is better than that of standard Monte Carlo simulations.
We compute the topological susceptibility in the pure Yang-Mills gauge theory for the gauge groups with N = 4, 5, 6 and three different lattice spacings. In order to deal with the freezing of topology, we use open boundary conditions. In addition, we employ the theoretically sound definition of the topological charge density through the gradient flow. Our final result in the limit N → ∞ represents a new quality in the verification of the Witten-Veneziano formula.
Lastly, we use the lattice formulation to verify the factorization of expectation values of products of gauge-invariant operators in the large N limit. We work with Wilson loops smoothed with the Yang-Mills gradient flow and with simulations up to the gauge group SU(8). The large N extrapolations at finite lattice spacing and in the continuum are compatible with factorization. Our data allow us not only to verify factorization, but also to test the 1/N scaling to very high precision, where we find it to agree very well with a quadratic series in 1/N, as predicted originally by 't Hooft for the pure Yang-Mills gauge theory.
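For reference, the central relations discussed above, written in standard textbook conventions (the normalizations are the usual ones and are not necessarily those of the thesis):

```latex
% Topological susceptibility: the integrated two-point function of the
% topological charge density q(x).
\chi \;=\; \int \mathrm{d}^4x \, \langle\, q(x)\, q(0) \,\rangle ,
\qquad
q(x) \;=\; \frac{1}{32\pi^2}\,\epsilon_{\mu\nu\rho\sigma}\,
\mathrm{tr}\!\left[ F_{\mu\nu}(x)\, F_{\rho\sigma}(x) \right].

% Witten-Veneziano relation: the pure-gauge susceptibility at large N
% fixes the eta' mass (N_f = 3 light flavours).
\chi \;=\; \frac{f_\pi^2}{2 N_f}
\left( m_{\eta'}^2 + m_{\eta}^2 - 2\, m_K^2 \right).

% Large-N factorization of gauge-invariant observables, e.g. Wilson loops.
\langle\, W(\mathcal{C}_1)\, W(\mathcal{C}_2) \,\rangle
\;=\; \langle W(\mathcal{C}_1) \rangle\, \langle W(\mathcal{C}_2) \rangle
\;+\; \mathcal{O}\!\left(1/N^2\right).
```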