1.
Two New Applications of Tensors to Machine Learning for Wireless Communications. Bhogi, Keerthana, 09 September 2021
With the increasing number of wireless devices and the phenomenal amount of data they generate, there is growing interest in the wireless communications community in complementing traditional model-driven design approaches with data-driven machine learning (ML)-based solutions. However, managing large-scale multi-dimensional data while maintaining the efficiency and scalability of ML algorithms has been a challenge. Tensors provide a useful framework for representing multi-dimensional data in an integrated manner, preserving the relationships in the data across its different dimensions. This thesis studies two new applications of tensors to ML for wireless communications in which the tensor structure of the data concerned is exploited in novel ways.
The first contribution of this thesis is a tensor learning-based, low-complexity precoder codebook design technique for a full-dimension multiple-input multiple-output (FD-MIMO) system with a uniform planar antenna (UPA) array at the transmitter (Tx), whose channel distribution is available through a dataset. Represented as a tensor, the FD-MIMO channel is decomposed using a tensor decomposition technique to obtain an optimal precoder expressed as the Kronecker product (KP) of two low-dimensional precoders, corresponding to the horizontal and vertical dimensions of the FD-MIMO channel. On the design side, we derive a criterion for optimal product precoder codebooks built from the obtained low-dimensional precoders. We show that this product codebook design problem is an unsupervised clustering problem on a Cartesian product of Grassmann manifolds (CPM), where the optimal cluster centroids form the desired codebook. We further simplify this clustering problem to a $K$-means algorithm on the low-dimensional factor Grassmann manifolds (GMs) of the CPM, which correspond to the horizontal and vertical dimensions of the UPA, thus significantly reducing the complexity of precoder codebook construction compared to existing codebook learning techniques.
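[Editor's illustration] The Kronecker-product structure of the UPA precoder described above can be sketched as follows. This is not the thesis's code: the antenna counts (Nh, Nv), beam counts (Bh, Bv), and the use of random semi-unitary factors are assumptions purely to show why the KP factorization is attractive, i.e. two small per-dimension precoders determine the full precoder and preserve semi-unitarity.

```python
import numpy as np

# Assumed UPA dimensions: Nh x Nv antennas, Bh x Bv beams per dimension.
Nh, Nv = 4, 2
Bh, Bv = 2, 1

rng = np.random.default_rng(0)

def random_semiunitary(n, b, rng):
    """Random n x b matrix with orthonormal columns (a point on a Grassmann manifold)."""
    q, _ = np.linalg.qr(rng.standard_normal((n, b)))
    return q[:, :b]

Fh = random_semiunitary(Nh, Bh, rng)   # horizontal-dimension precoder
Fv = random_semiunitary(Nv, Bv, rng)   # vertical-dimension precoder
F = np.kron(Fh, Fv)                    # full UPA precoder, (Nh*Nv) x (Bh*Bv)

# The KP of two semi-unitary matrices is itself semi-unitary, so the
# product precoder inherits the usual power constraint from its factors.
print(F.shape)                                        # (8, 2)
print(np.allclose(F.conj().T @ F, np.eye(Bh * Bv)))   # True
```

Because each factor lives on a small Grassmann manifold, codebooks can be learned per dimension (as in the $K$-means simplification described above) rather than on the full-dimensional manifold.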
The second contribution of this thesis is a tensor-based, bandwidth-efficient gradient communication technique for federated learning (FL) with convolutional neural networks (CNNs). Concisely, FL is a decentralized ML approach in which a server jointly trains an ML model using the data generated by distributed users, who share only their local gradients with the server and never the raw data. Here, we focus on efficient compression and reconstruction of the convolutional gradients at the users and the server, respectively. To reduce the gradient communication overhead, we compress the sparse gradients at the users into low-dimensional estimates using a compressive sensing (CS)-based technique and transmit these to the server for joint training of the CNN. We exploit a natural tensor structure of the convolutional gradients to demonstrate the correlation of a gradient element with its neighbors. We propose a novel prior for the convolutional gradients that captures this spatial consistency together with their sparse nature, and a novel Bayesian reconstruction algorithm based on the Generalized Approximate Message Passing (GAMP) framework that exploits this prior information. Through numerical simulations, we demonstrate that the developed gradient reconstruction method improves the convergence of the CNN model. / Master of Science / The increase in the number of wireless and mobile devices has led to the generation of massive amounts of multi-modal data at the users in various real-world applications, including wireless communications. This has led to increasing interest in machine learning (ML)-based, data-driven techniques for communication system design. The native setting of ML is centralized, where all the data is available on a single device.
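[Editor's illustration] The compress-then-reconstruct round trip for sparse gradients can be sketched as below. The thesis uses a GAMP-based Bayesian reconstruction with a spatially-consistent prior; here plain iterative hard thresholding stands in as the recovery step, and the gradient length, measurement count, and sparsity level are all assumed values, purely to show the mechanics of CS-based gradient compression.

```python
import numpy as np

rng = np.random.default_rng(1)

n, m, k = 256, 128, 8                # gradient length, measurements, sparsity (assumed)

# Sparse stand-in for a convolutional gradient.
g = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
g[support] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # measurement matrix known to user and server
y = A @ g                                      # the user transmits m values instead of n

def iht(y, A, k, iters=200, step=0.5):
    """Iterative hard thresholding: gradient step, then keep the k largest entries."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + step * A.T @ (y - A @ x)
        x[np.argsort(np.abs(x))[:-k]] = 0.0    # zero all but the k largest magnitudes
    return x

g_hat = iht(y, A, k)
print(np.linalg.norm(g - g_hat) / np.linalg.norm(g))   # small relative error
```

The thesis's GAMP reconstruction replaces the hard-thresholding step with Bayesian denoising under a prior that also encodes the neighbor correlations of convolutional gradients, which is what improves CNN convergence over sparsity-only recovery.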
However, the distributed nature of the users and their data has also motivated the development of distributed ML techniques. Since the success of ML techniques is grounded in data, their efficiency and scalability must be maintained when managing large-scale data. Tensors are multi-dimensional arrays that provide an integrated way of representing multi-modal data. Tensor algebra and tensor decompositions have enabled the extension of several classical ML techniques to tensor-based ML techniques in application domains such as computer vision, data mining, image processing, and wireless communications. Tensor-based ML techniques have been shown to improve the performance of ML models because of their ability to leverage the underlying structural information in the data.
In this thesis, we present two new applications of tensors to ML for wireless communications and show how the tensor structure of the data concerned can be exploited in different ways. The first contribution is a tensor learning-based precoder codebook design technique for full-dimension multiple-input multiple-output (FD-MIMO) systems, in which we develop a scheme for designing low-complexity product precoder codebooks by identifying and leveraging a tensor representation of the FD-MIMO channel. The second contribution is a tensor-based gradient communication scheme for a decentralized ML technique known as federated learning (FL) with convolutional neural networks (CNNs), in which we design a novel bandwidth-efficient gradient compression-reconstruction algorithm that leverages the tensor structure of the convolutional gradients. Numerical simulations in both applications demonstrate that exploiting the underlying tensor structure in the data provides significant gains in the respective performance criteria.
2.
Data mining and predictive analytics application on cellular networks to monitor and optimize quality of service and customer experience. Muwawa, Jean Nestor Dahj, 11 1900
This research study focuses on data mining and machine learning application models for cellular network traffic, with the objective of giving Mobile Network Operators a full view of the performance branches (services, devices, subscribers). The purpose is to optimize, and to minimize the time needed for, the detection of service and subscriber behaviour patterns. Different data mining techniques and predictive algorithms have been applied to real cellular network datasets to uncover data usage patterns using specific Key Performance Indicators (KPIs) and Key Quality Indicators (KQIs). The following tools are used to develop the concept: RStudio for machine learning and process visualization; Apache Spark and SparkSQL for data and big data processing; and clicData for service visualization. Two use cases were studied during this research. In the first, data and predictive analytics are applied in the telecommunications field to efficiently address users' experience, with the goal of increasing customer loyalty and decreasing churn (customer attrition). Using real cellular network transactions, predictive analytics identifies customers who are likely to churn, which can result in revenue loss. Prediction algorithms and models including classification trees, random forests, neural networks, and gradient boosting are used together with exploratory data analysis to determine the relationships between the predictor variables. The data is segmented into two sets: a training set used to fit the model and a testing set used to evaluate it. The best-performing model is selected based on prediction accuracy, sensitivity, specificity, and the confusion matrix on the test set. The second use case analyses service quality management using modern data mining techniques and the advantages of in-memory big data processing with Apache Spark and SparkSQL to save on tool investment; a low-cost Service Quality Management model is thus proposed and analyzed. With the increase in smartphone adoption and access to mobile internet services, applications such as streaming and interactive chat require a certain service level to ensure customer satisfaction. As a result, an SQM framework is developed with a Service Quality Index (SQI) and Key Performance Indicators (KPIs). The research concludes with recommendations and future studies on modern technology applications in telecommunications, including the Internet of Things (IoT), cloud computing, and recommender systems. / Cellular networks have evolved, and are still evolving, from traditional circuit-switched GSM (Global System for Mobile Communication), which supported only voice services and extremely low data rates, to all-packet LTE networks accommodating the high-speed data used for service applications such as video streaming, video conferencing, and heavy torrent downloads; and, in the near future, the roll-out of fifth-generation (5G) cellular networks, intended to support complex technologies such as the IoT (Internet of Things) and high-definition video streaming, and projected to cater for massive amounts of data. With the high demand for network services and easy access to mobile phones, billions of transactions are performed by subscribers. These transactions appear in the form of SMSs, handovers, voice calls, web browsing activities, video and audio streaming, and heavy downloads and uploads.
Nevertheless, the rapid growth in data traffic and the high requirements of new services introduce bigger challenges for Mobile Network Operators (MNOs) in analysing the big data traffic flowing through the network. Quality of Service (QoS) and Quality of Experience (QoE) therefore become a challenge: inefficiency in mining and analysing data and in applying predictive intelligence to network traffic can produce a high rate of unhappy customers, loss of revenue, and a negative perception of services. Researchers and service providers are investing in data mining, machine learning, and artificial intelligence (AI) methods to manage services and experience. This research study focuses on data mining and machine learning application models for network traffic, with the objective of giving Mobile Network Operators a full view of the performance branches (services, devices, subscribers). The purpose is to optimize, and to minimize the time needed for, the detection of service and subscriber behaviour patterns. Different data mining techniques and predictive algorithms are applied to cellular network datasets to uncover data usage patterns using specific Key Performance Indicators (KPIs) and Key Quality Indicators (KQIs). The following tools are used to develop the concept: RStudio for machine learning, Apache Spark and SparkSQL for data processing, and clicData for visualization. / Electrical and Mining Engineering / M. Tech (Electrical Engineering)
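[Editor's illustration] The churn-prediction workflow described above (train/test split, tree-based model, accuracy/sensitivity/specificity and confusion-matrix evaluation) can be sketched in Python with scikit-learn. The thesis itself works in RStudio and Spark on real operator data; this is only an illustrative analogue on synthetic data, and the feature names are invented stand-ins for real KPIs.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score

rng = np.random.default_rng(42)

# Synthetic stand-in for cellular KPIs (e.g. call drops, data usage, complaints).
n = 2000
X = rng.standard_normal((n, 4))
# Churn made more likely when "call drops" (col 0) is high and "data usage" (col 1) is low.
p = 1 / (1 + np.exp(-(2 * X[:, 0] - 1.5 * X[:, 1])))
y = (rng.random(n) < p).astype(int)

# Segment the data into a training set and a testing set.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
y_hat = model.predict(X_te)

# Evaluation on the test set: accuracy, sensitivity, specificity, confusion matrix.
tn, fp, fn, tp = confusion_matrix(y_te, y_hat).ravel()
print("accuracy   ", accuracy_score(y_te, y_hat))
print("sensitivity", tp / (tp + fn))
print("specificity", tn / (tn + fp))
```

Sensitivity here measures how many actual churners the model catches, which is the business-critical metric when churn translates directly into revenue loss.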
3.
Apprentissage et annulation des bruits impulsifs sur un canal CPL indoor en vue d'améliorer la QoS des flux audiovisuels / Learning and cancelling impulsive noise on an indoor PLC channel to improve the QoS of audiovisual flows. Fayad, Farah, 02 April 2012
The work presented in this thesis aims to propose and evaluate the performance of several techniques for suppressing asynchronous impulsive noise, adapted to indoor power-line communication (PLC) transmissions. Indeed, beyond the physical characteristics specific to this type of transmission channel, asynchronous impulsive noise remains the severe constraint that penalizes PLC systems in terms of QoS. To counter the degradation caused by asynchronous impulsive noise, retransmission techniques are widely used. Although effective, retransmission reduces throughput and introduces additional processing delays that can be critical for real-time applications. Several alternative solutions have been proposed in the literature to minimize the impact of impulsive noise on PLC transmissions; however, few of them achieve a good compromise between correction capability and implementation complexity. In this context, we first propose an adaptive linear filter, the Widrow filter, also known as ADALINE (ADAptive LInear NEuron), as a denoising method for PLC systems. To improve on the denoising performed by ADALINE, we then use a nonlinear neural network (NN), which generalizes the ADALINE structure. In a second step, to further improve the NN denoising performance, we propose an impulsive-noise cancellation scheme combining two algorithms: Empirical Mode Decomposition (EMD) followed by a multilayer perceptron neural network (MLPNN). The EMD pre-processes the noisy signal by decomposing it into simpler, more easily analyzed components, after which the neural network performs the denoising. Finally, we propose an asynchronous impulsive-noise estimation method based on the Generalized Pencil Of Function (GPOF) algorithm. The effectiveness of the two methods, EMD-MLPNN and the GPOF-based technique, is evaluated using a digital PLC transmission simulation chain compatible with the HPAV standard.
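[Editor's illustration] The ADALINE/LMS stage described above can be sketched as a simple impulsive-noise canceller. This is not the thesis's implementation: the signal model (a sinusoid plus rare large spikes), filter order, step size, and detection threshold are all assumptions, chosen only to show the adaptive-prediction idea, i.e. impulses are poorly predicted from past samples, so a large prediction error flags them and the prediction itself replaces them.

```python
import numpy as np

rng = np.random.default_rng(7)

# Clean signal: a sinusoid standing in for the received PLC signal.
t = np.arange(4000)
clean = np.sin(2 * np.pi * t / 50)

# Asynchronous impulsive noise: rare, large spikes (assumed model).
impulses = np.zeros_like(clean)
hits = rng.random(t.size) < 0.01
impulses[hits] = 5.0 * rng.standard_normal(hits.sum())
noisy = clean + impulses

def adaline_cancel(x, order=8, mu=0.02, thresh=3.0):
    """ADALINE/LMS predictor used as an impulsive-noise canceller:
    predict x[n] from the previous `order` cleaned samples; a large
    prediction error flags an impulse, which is replaced by the
    prediction (and excluded from the weight update)."""
    w = np.zeros(order)
    out = x.copy()
    for n in range(order, x.size):
        window = out[n - order:n][::-1]
        pred = w @ window
        err = x[n] - pred
        if abs(err) > thresh:            # impulse detected
            out[n] = pred                # cancel it with the prediction
        else:
            w += mu * err * window       # LMS update on clean samples only
    return out

denoised = adaline_cancel(noisy)
mse_before = np.mean((noisy - clean) ** 2)
mse_after = np.mean((denoised - clean) ** 2)
print(mse_before, mse_after)             # the canceller should reduce the MSE
```

The EMD+MLPNN scheme in the thesis generalizes this idea: the nonlinear network replaces the linear predictor, and EMD first splits the noisy signal into simpler components that are easier to denoise.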