231
CoreLB: uma proposta de balanceamento de carga na rede com OpenFlow e SNMP / CoreLB: a proposal for in-network load balancing with OpenFlow and SNMP
Dossa, Clebio Gavioli, 18 August 2016
Submitted by Silvana Teresinha Dornelles Studzinski (sstudzinski) on 2016-11-01T15:35:45Z; made available in DSpace on 2016-11-01T15:35:45Z (GMT). Bitstream: Clebio Dossa_.pdf, 1252617 bytes, checksum 784b95c29ee09e2a922686b26cb7aa51 (MD5).
Previous issue date: 2016-08-18 / Currently, many services distribute their load across several compute nodes, forwarding connections according to some balancing strategy. Usually a dedicated appliance performs the balancing; it can be a single point of failure and is often expensive. Software-Defined Networking (SDN) is changing network management paradigms, absorbing specialized services, automating processes, and adding intelligence to static rules, with a wide variety of implementation options. Load balancing is one such specialized service that can benefit from SDN concepts, at low cost and without the rigid, static process definitions found in many current load-balancing models. The protocols that support SDN enable alternative, efficient solutions to this problem. This work therefore proposes a methodology for balancing load across the distinct servers of a pool, with the redirection of traffic performed by the network itself. The solution is called Core-based Load Balance (CoreLB), because the specialized load-balancing service moves into the network core, where packet forwarding is natively performed. The methodology uses the SNMP protocol to assess the resource usage of each compute node and OpenFlow statistics to assess network consumption. Evaluated on Web services, the combination of network statistics and server-load information for balancing decisions proved efficient, yielding average user response times about 19% better than the unbalanced scenario and around 9% better than the other load-balancing strategies evaluated, as well as a more even distribution of resource consumption across the servers.
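The CoreLB decision step described above, combining SNMP server-load readings with OpenFlow traffic statistics, can be sketched as follows. This is a minimal illustration under assumed inputs: the function name, the 0.6/0.4 weighting and the sample pool data are ours, not the thesis's.

```python
# Illustrative sketch (not the thesis implementation): choosing a backend
# server by combining SNMP-reported CPU load with OpenFlow byte counters.
# The weights and sample values below are assumptions for illustration.

def pick_server(stats, cpu_weight=0.6, net_weight=0.4):
    """Return the server with the lowest combined load score.

    stats maps server name -> dict with:
      'cpu'   : CPU utilization in [0, 1] (e.g. polled via SNMP)
      'bytes' : recent byte count on the switch port (OpenFlow stats)
    """
    max_bytes = max(s["bytes"] for s in stats.values()) or 1
    def score(s):
        # Normalize bytes against the busiest port so both terms are in [0, 1].
        return cpu_weight * s["cpu"] + net_weight * s["bytes"] / max_bytes
    return min(stats, key=lambda name: score(stats[name]))

pool = {
    "web1": {"cpu": 0.80, "bytes": 4_000_000},
    "web2": {"cpu": 0.30, "bytes": 1_500_000},
    "web3": {"cpu": 0.55, "bytes": 2_000_000},
}
print(pick_server(pool))  # web2 has the lowest combined score
```

A real deployment would poll these values periodically and install the corresponding OpenFlow flow rules; the sketch only shows the scoring decision.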
232
Data mining and predictive analytics application on cellular networks to monitor and optimize quality of service and customer experience
Muwawa, Jean Nestor Dahj, 11 1900
This research study focuses on applications of data mining and machine learning to cellular network traffic, with the objective of giving Mobile Network Operators a full view of the main performance branches (services, devices, subscribers). The purpose is to optimize, and minimize the time needed for, the detection of service and subscriber behaviour patterns. Different data mining techniques and predictive algorithms are applied to real cellular network datasets to uncover data usage patterns using specific Key Performance Indicators (KPIs) and Key Quality Indicators (KQIs). The following tools are used to develop the concept: RStudio for machine learning and process visualization, Apache Spark and SparkSQL for (big) data processing, and clicData for service visualization. Two use cases are studied in this research. In the first, data and predictive analytics are applied in telecommunications to address user experience, with the goal of increasing customer loyalty and decreasing churn (customer attrition). Using real cellular network transactions, predictive analytics identify customers who are likely to churn, which can result in revenue loss. Prediction algorithms and models including classification trees, random forests, neural networks and gradient boosting are used together with an exploratory data analysis that determines the relationships between predictor variables. The data is split into two sets: a training set to train each model and a testing set to evaluate it. The best-performing model is selected based on prediction accuracy, sensitivity, specificity and the confusion matrix on the test set. The second use case analyses Service Quality Management (SQM) using modern data mining techniques and the advantages of in-memory big data processing with Apache Spark and SparkSQL to save on tool investment; a low-cost SQM model is thus proposed and analyzed. With the increase in smartphone adoption and access to mobile internet services, applications such as streaming and interactive chat require a certain service level to ensure customer satisfaction. An SQM framework is therefore developed around a Service Quality Index (SQI) and Key Performance Indicators (KPIs). The research concludes with recommendations and future studies on modern technology applications in telecommunications, including the Internet of Things (IoT), cloud computing and recommender systems. / Cellular networks have evolved and are still evolving: from traditional circuit-switched GSM (Global System for Mobile Communication), which supported only voice services and extremely low data rates, to all-packet LTE networks accommodating the high-speed data used by applications such as video streaming, video conferencing and heavy torrent downloads; and, in the near future, the roll-out of fifth-generation (5G) cellular networks, intended to support complex technologies such as IoT, high-definition video streaming, and massive amounts of data. With high demand for network services and easy access to mobile phones, billions of transactions are performed by subscribers, in the form of SMSs, handovers, voice calls, web browsing, video and audio streaming, and heavy downloads and uploads.
Nevertheless, the rapid growth in data traffic and the high requirements of new services pose ever bigger challenges to Mobile Network Operators (MNOs) in analysing the big data traffic flowing through the network; maintaining Quality of Service (QoS) and Quality of Experience (QoE) thus becomes a challenge. Inefficiency in mining and analysing data, and in applying predictive intelligence to network traffic, can produce a high rate of unhappy customers, loss of revenue and a negative perception of services. Researchers and service providers are therefore investing in data mining, machine learning and AI (Artificial Intelligence) methods to manage services and experience. This research study focuses on applications of data mining and machine learning to network traffic, with the objective of giving Mobile Network Operators a full view of the main performance branches (services, devices, subscribers). The purpose is to optimize, and minimize the time needed for, the detection of service and subscriber behaviour patterns. Different data mining techniques and predictive algorithms are applied to cellular network datasets to uncover data usage patterns using specific Key Performance Indicators (KPIs) and Key Quality Indicators (KQIs). The following tools are used to develop the concept: RStudio for machine learning, Apache Spark and SparkSQL for data processing, and clicData for visualization. / Electrical and Mining Engineering / M. Tech (Electrical Engineering)
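The evaluation criteria named above (accuracy, sensitivity and specificity derived from the confusion matrix on the test set) can be written out in plain Python. The counts below are invented for illustration and are not results from the thesis.

```python
# Illustrative sketch: computing the churn-model evaluation metrics named
# in the abstract from a binary confusion matrix. The counts used in the
# example are invented, not taken from the study.

def churn_metrics(tp, fp, fn, tn):
    """tp/fp/fn/tn are confusion-matrix counts; 'positive' = churner."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    sensitivity = tp / (tp + fn)   # true positive rate: churners caught
    specificity = tn / (tn + fp)   # true negative rate: loyal kept
    return accuracy, sensitivity, specificity

# Hypothetical test-set results: 1000 subscribers, 100 actual churners.
acc, sens, spec = churn_metrics(tp=80, fp=30, fn=20, tn=870)
print(f"accuracy={acc:.2f} sensitivity={sens:.2f} specificity={spec:.2f}")
# accuracy=0.95 sensitivity=0.80 specificity=0.97
```

A model with high accuracy but low sensitivity would miss most churners, which is why the study compares models on all three metrics rather than accuracy alone.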
233
Análise do consumo energético em redes subaquáticas utilizando códigos fontanais / Energy consumption analysis of underwater acoustic networks using fountain codes
Simão, Daniel Hayashida, 06 February 2017
This work addresses the application of fountain codes to underwater networks, which transmit data below the surface using acoustic signals and have many applications. Such networks are characterized by a low propagation speed and a smaller bandwidth than networks operating over better-known transmission media, such as radio-frequency wireless, resulting in larger packet delivery delays. Aiming to minimize these delays and increase the energy efficiency of underwater networks, this work optimizes the transmission system by inserting a fountain error-correcting code at the transmitter. To that end, it was first necessary to model the energy consumption required for the successful transmission of data packets in an underwater network using fountain codes. Among the results, the most relevant shows that fountain codes can reduce energy consumption by up to 30% at a transmission distance of 20 km for a target frame error rate (FER) of Po = 10^-5, and by up to 25% at the same distance for a target FER of Po = 10^-3.
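One intuition behind such savings can be captured in a toy energy model. This is our own simplified illustration, not the thesis's derivation: on a high-latency acoustic link, per-packet acknowledgements and retransmission rounds are costly, whereas a rateless fountain code streams slightly more than the required number of encoded symbols and needs only a single final acknowledgement. All energy units and the 5% overhead are invented.

```python
# Illustrative toy model (our assumptions, not the thesis's derivation):
# total energy to deliver k packets over a lossy acoustic channel, with
# plain stop-and-wait ARQ versus an idealized fountain code.

def arq_energy(k, p, e_data=1.0, e_ack=0.5):
    """ARQ: each loss triggers a retransmission, and every successfully
    received packet is acknowledged individually."""
    expected_sends = k / (1.0 - p)   # geometric number of tries per packet
    acks = k                         # one ACK per delivered packet
    return expected_sends * e_data + acks * e_ack

def fountain_energy(k, p, e_data=1.0, e_ack=0.5, eps=0.05):
    """Fountain code: the sender streams encoded symbols until the
    receiver decodes, then a single ACK stops the stream."""
    expected_sends = k * (1.0 + eps) / (1.0 - p)
    return expected_sends * e_data + 1 * e_ack

k, p = 1000, 0.3  # invented example: 30% loss on the acoustic channel
e_arq, e_fc = arq_energy(k, p), fountain_energy(k, p)
print(f"ARQ: {e_arq:.1f}  fountain: {e_fc:.1f}  saving: {1 - e_fc / e_arq:.0%}")
```

Under these invented parameters the fountain scheme spends slightly more energy on data symbols but eliminates almost all acknowledgement energy, which is where the net saving comes from in this sketch.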
234
Data center optical networks: short- and long-term solutions / Réseaux optiques pour les centres de données : solutions à court et long terme
Mestre Adrover, Miquel Angel, 21 October 2016
Data centers are becoming increasingly important and ubiquitous, ranging from large server farms dedicated to tasks such as data processing, computing and storage, to small distributed server farms. The spread of cloud services is driving a relentless increase in traffic demand inside data centers, which doubles every 12 to 15 months. In this thesis we study the evolution of data center networks and propose short- and long-term solutions for their physical intra-connection. Today, rapidly growing traffic highlights the urgent need for high-speed, low-cost interfaces able to cope with bandwidth-hungry new applications. In the short term, we therefore propose novel high-data-rate, low-cost optical transceivers enabling transmission at up to 200 Gb/s using intensity-modulation and direct-detection schemes. Several advanced pulse amplitude modulation (PAM) schemes are explored while pushing speeds towards record symbol rates, as high as 100 GBd. High-speed electrical signal generation is enabled by an integrated selector-power digital-to-analog converter, capable of doubling the input baud rate while outputting advanced multi-level PAM signals. Nevertheless, global data center traffic will keep growing. Current data centers rely on several tiers of high-radix all-electronic Ethernet switches to build an interconnection network able to carry such a vast amount of traffic. In such an architecture, traffic growth translates directly into more networking components: switches with higher port counts, more interfaces and more cables. The cost and energy consumption that can be expected in the future are unsustainable, calling for a network reassessment. We therefore present a novel concept for intra-data-center networks called burst optical slot switching (BOSS), in which servers are connected via BOSS nodes through wavelength- and time-division-multiplexed fiber rings organized in a torus topology. In this thesis we investigate the implementation of BOSS nodes, in particular the switching fabric and the optical transceivers. The main element of the switching fabric is the slot blocker, which can erase any packet (slot) on any wavelength on a nanosecond time scale. On the one hand, we explore the use of semiconductor optical amplifiers as gating elements within the slot blocker and study their cascadability; on the other, we develop a monolithically integrated slot blocker capable of handling up to sixteen wavelength channels with dual-polarization diversity. We then present several transceiver architectures and study their performance. The transceivers' signaling must fulfill two main requirements: packet-mode operation, i.e. the ability to recover bursts a few microseconds long; and resilience to tight filtering, which occurs when cascading many nodes (e.g. up to 100). First, we build packet-mode Nyquist-pulse-shaped N-QAM transceivers, which adapt the modulation format to the number of nodes to traverse. We then propose the use of coherent optical orthogonal frequency-division multiplexing (CO-OFDM); with its inherent packet structure and high spectral-tailoring capabilities, we demonstrate that CO-OFDM-based transceivers offer higher capacity and longer reach than their Nyquist counterparts. Finally, we compare our BOSS solution to today's folded-Clos topology and show that the BOSS architecture requires about 400 times fewer transponders and cables than today's electronic switching networks, paving the way to highly scalable and sustainable data centers.