  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
121

Performances of LTE networks / Performances des Réseaux LTE

Iturralde Ruiz, Geovanny Mauricio 02 October 2012 (has links)
Driven by the growing demand for high-speed wireless broadband services, Long Term Evolution (LTE) has emerged as a promising solution for mobile communications, and deployments are under way in many countries. LTE, specified by the 3GPP, offers an all-IP architecture that provides high data rates and efficient support for multimedia applications, together with built-in mechanisms for handling heterogeneous traffic classes such as voice, video, file transfer and e-mail. These heterogeneous flows must be managed according to their required quality of service (QoS) as well as the channel quality and environmental conditions, which can vary considerably over short time scales. The 3GPP specifications leave the resource-management and scheduling algorithms of the access network unstandardized, although they are crucial for guaranteeing performance and QoS. This thesis focuses on QoS in the LTE downlink, and in particular on resource management and scheduling over the radio interface of the access network. After surveying, classifying and comparing existing scheduling mechanisms, we propose three QoS-aware resource-allocation mechanisms for macrocell scenarios, oriented towards real-time services, and two interference-mitigation mechanisms for femtocell scenarios that also account for the QoS of real-time services. The first macrocell mechanism combines a virtual-token (token-bucket) method with opportunistic schedulers; it performs very well but does not ensure good fairness. The second relies on game theory, specifically the Shapley value, to reach a high level of fairness among service classes at the expense of some QoS, which led us to combine the two schemes in a third mechanism. The second part of the thesis is devoted to femtocells, which provide valuable complementary coverage but raise the problem of interference. Our first interference-mitigation mechanism is based on transmit power control and uses non-cooperative game theory, constantly trading throughput against interference to find an optimal transmission power level. The second mechanism is centralized and uses a bandwidth-division approach that forces femtocells onto different sub-bands to avoid interference; the bandwidth sharing and assignment are performed with game theory (the Shapley value), taking the application type and bit rate into account, and reduce interference considerably compared with other bandwidth-division schemes. All the proposed mechanisms were implemented and evaluated in a simulation environment based on the LTE-Sim tool, to whose development we contributed; throughput, packet loss ratio, delay, fairness index and SINR are used to assess their efficiency.
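The Shapley-value step in the second macrocell scheme can be made concrete with a small sketch. The thesis itself is not reproduced here, so the characteristic function (satisfied demand capped by cell capacity), the three traffic classes and the demand figures below are illustrative assumptions rather than the authors' exact formulation.

```python
from itertools import permutations
from math import factorial

def shapley(players, v):
    """Exact Shapley value: average each player's marginal contribution
    over all orderings (fine for a handful of traffic classes)."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition, prev = set(), 0.0
        for p in order:
            coalition.add(p)
            val = v(frozenset(coalition))
            phi[p] += val - prev
            prev = val
    return {p: phi[p] / factorial(len(players)) for p in players}

# Assumed example: per-TTI demand of three service classes, in resource blocks,
# competing for a 10 MHz carrier (50 RBs). v(S) = demand of S, capped by capacity.
demand = {"voip": 10, "video": 40, "best_effort": 30}
capacity = 50
value = lambda S: min(sum(demand[p] for p in S), capacity)
print(shapley(list(demand), value))   # a fair split of the 50 RBs among classes
```

Because the Shapley shares sum to the value of the grand coalition (here the 50 available resource blocks), they can be used directly as per-class budgets for an underlying opportunistic scheduler.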
122

LTE-A D2D傳輸在動態頻率重用下之頻譜分配 / Spectrum allocation of LTE-A D2D transmissions using dynamic frequency reuse

許華元, Hsu, Hwa Yuan Unknown Date (has links)
With static frequency reuse, LTE-A can effectively reduce interference, but when UEs (User Equipment) are densely concentrated the available spectrum becomes insufficient. In the conventional transmission mode, two UEs that wish to communicate must relay through the base station: the transmitter sends to the base station, which forwards to the receiver, so every exchange requires two wireless transmissions. When a pair of UEs are close to each other, D2D (Device-to-Device) transmission lets them communicate directly over the LTE-A spectrum, saving spectrum resources. Building on the shortcomings of static frequency reuse and on proximity-based D2D transmission, this study proposes DFRDD (Dynamic Frequency Reuse for D2D transmission), which combines dynamic frequency reuse with D2D transmission. A cell is divided into a center region and an outer region, and the outer region into three sectors. Spectrum is adjusted dynamically: when spectrum runs short, the center region may use spectrum of the outer region, while each outer sector may use at most one third of the center region's spectrum. For D2D transmission, the distance between the D2D UEs and the BS/RS (Base Station/Relay Station) is used to select the sub-bands that cause the least interference to cellular UEs, improving transmission efficiency. Simulation results show that, thanks to dynamic frequency reuse and D2D sub-band selection, DFRDD achieves higher throughput than the methods proposed by H. S. Chae [18], Bao [19] and Zhang [20].
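A minimal sketch of the two ideas just described, the center/outer borrowing rule and the distance-based D2D sub-band choice, assuming arbitrary pool sizes and plain Euclidean distance as the interference proxy; this is an illustration, not the exact DFRDD algorithm.

```python
import math

def center_grant(demand, center_free, outer_free):
    """Center-region allocation: use the center pool first and borrow the
    shortfall from the outer region when the center pool is exhausted."""
    own = min(demand, center_free)
    borrowed = min(demand - own, outer_free)
    return own, borrowed

def sector_grant(demand, sector_free, center_total, center_free):
    """Outer-sector allocation: a sector may borrow at most one third of the
    center region's spectrum."""
    own = min(demand, sector_free)
    borrow_budget = min(center_free, center_total // 3)
    borrowed = min(demand - own, borrow_budget)
    return own, borrowed

def pick_d2d_subband(d2d_xy, cellular_ues_by_subband):
    """Choose the sub-band whose nearest cellular UE is farthest from the D2D
    pair, a simple distance proxy for 'least interference caused'."""
    def nearest_ue_distance(subband):
        return min(math.dist(d2d_xy, ue) for ue in cellular_ues_by_subband[subband])
    return max(cellular_ues_by_subband, key=nearest_ue_distance)

# toy usage with assumed numbers (resource blocks and positions in meters)
print(center_grant(demand=70, center_free=60, outer_free=30))                     # (60, 10)
print(sector_grant(demand=30, sector_free=20, center_total=60, center_free=50))   # (20, 10)
print(pick_d2d_subband((0.0, 0.0), {"sb1": [(50, 0)], "sb2": [(200, 30)]}))       # 'sb2'
```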
123

[en] EVALUATION OF INTRASYSTEM INTERFERENCE IN 4G LTE NETWORKS AND BETWEEN DIGITAL TV AND LTE – SIMULATIONS AND FIELD MEASUREMENTS / [pt] AVALIAÇÃO DAS INTERFERÊNCIAS INTRA-SISTEMA EM REDES 4G LTE E ENTRE TV DIGITAL E LTE – SIMULAÇÃO E MEDIDAS EM CAMPO

JUSSIF JUNIOR ABULARACH ARNEZ 23 March 2015 (has links)
This dissertation investigates, through computer simulations, the use of the cognitive-radio concept, specifically spectrum sensing, applied to femtocells of the LTE Release 10 mobile system in order to reduce the cross-tier interference problems that arise in coexistence scenarios of heterogeneous networks (femtocells and macrocells). In addition, the interference generated by LTE Release 10 femtocells in digital TV receivers operating in adjacent frequency bands is investigated. In this case, besides the computer simulations, measurements were carried out in an experimental setup implementing coexistence scenarios of an LTE femtocell and the Brazilian Digital TV System in the 700 MHz frequency band.
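Sensing-based femtocells of this kind usually rely on an energy detector; the sketch below is the standard textbook formulation, with the threshold set from a target false-alarm probability via the usual Gaussian approximation. The sample count, noise power and test signal are assumptions for illustration, not the detector or simulation chain used in the dissertation.

```python
import numpy as np
from statistics import NormalDist

def energy_detect(samples, noise_power, p_fa=0.01):
    """Energy detector: declare the channel busy when the average sample
    energy exceeds a threshold chosen for a target false-alarm probability."""
    n = len(samples)
    statistic = np.mean(np.abs(samples) ** 2)
    q_inv = NormalDist().inv_cdf(1.0 - p_fa)             # Gaussian approx., large n
    threshold = noise_power * (1.0 + q_inv / np.sqrt(n))
    return statistic > threshold

# toy check: noise only vs. noise plus a weak (assumed) macrocell carrier
rng = np.random.default_rng(0)
noise = (rng.normal(size=1024) + 1j * rng.normal(size=1024)) / np.sqrt(2)   # unit-power noise
carrier = 0.5 * np.exp(2j * np.pi * 0.1 * np.arange(1024))                  # about -6 dB SNR
print(energy_detect(noise, 1.0), energy_detect(noise + carrier, 1.0))
```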
124

Cooperation strategies for inter-cell interference mitigation in OFDMA systems / Les stratégies de coopération inter-cellules pour l'atténuation des interférences dans les systèmes OFDMA

Kurda, Reben 18 March 2015 (has links)
Recently, the use of cellular networks has changed drastically with the emergence of the fourth-generation (4G) LTE/LTE-A (Long Term Evolution-Advanced) systems. Homogeneous networks, initially designed for voice and low-to-medium data rates, struggle to cope with the growth of multimedia data traffic and its demanding quality-of-service (QoS) requirements. To better meet these needs, 4G networks introduced the Heterogeneous Network (HetNet) paradigm: low-power small cells (picocells, femtocells and relays) are overlaid on the traditional macrocell coverage, improving indoor coverage and increasing system capacity by making better use of the spectrum and bringing users closer to their access point. A direct consequence of this cell densification is the interference generated between cells of different tiers when they reuse the same frequencies, so efficient inter-cell interference mitigation in co-channel HetNet deployments remains a major challenge for both industry and academia. This thesis focuses on LTE-A HetNet systems, which are based on Orthogonal Frequency Division Multiple Access (OFDMA), and investigates the interference that appears when different types of base stations are deployed together, in two cases: macro-femtocell and macro-picocell coexistence. We propose practical power-adjustment solutions for managing inter-cell interference dynamically in both cases. In the first part, dedicated to femtocell and macrocell coexistence, we design an MBS-assisted femtocell power-adjustment strategy that takes the performance of femtocell users into account while mitigating the interference caused to victim macrocell users. We further propose a cooperative, context-aware interference-mitigation method that exploits local femtocell context (such as the number of users in outage) and is derived for realistic scenarios involving user mobility and varying locations. Simulations show that the interference on macrocell users is significantly reduced and that the femtocells can dynamically adjust their transmission power to reach the desired trade-off between macrocell-user performance and that of their own users. In the second part, where picocells are deployed under the umbrella of the macrocell, we address interference management when picocells are configured with cell range expansion, which improves user/cell association, reduces interference and offers better spectral efficiency. We propose an MBS-assisted collaborative scheme, supported by an analytical model that predicts the mobility of macrocell users crossing the picocell's range-expansion area, and adapt the muting ratio that governs the frequency-resource partitioning between the two tiers according to the estimated dwell time and bandwidth demands of the range-expanded users, thereby providing an efficient trade-off between the achievable throughputs of the macrocell and the picocell.
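The two control knobs discussed above, the femtocell transmit power and the macro muting (resource-partitioning) ratio, can be caricatured with two tiny update rules. Step sizes, bounds and the linear muting rule are assumptions for illustration; they are not the MBS-assisted strategies or the analytical mobility model developed in the thesis.

```python
def adjust_femto_power(p_dbm, macro_outage_count, femto_rate_ok,
                       step_db=1.0, p_min=-10.0, p_max=20.0):
    """One iteration of a toy MBS-assisted power loop: back off when the
    macrocell reports victim users in outage near this femtocell, recover
    power (up to a cap) while the femto's own users miss their rate targets."""
    if macro_outage_count > 0:
        p_dbm -= step_db * macro_outage_count    # penalty grows with reported outages
    elif not femto_rate_ok:
        p_dbm += step_db                         # no victims reported: room to improve own users
    return max(p_min, min(p_max, p_dbm))

def muting_ratio(cre_user_share, dwell_factor, r_min=0.125, r_max=0.5):
    """Toy rule for the macrocell muting ratio: grant the picocell's
    range-expanded users more protected resources the more of them there are
    and the longer they are predicted to stay in the expanded region."""
    return max(r_min, min(r_max, cre_user_share * dwell_factor))

# assumed usage: 3 victim macro users reported, femto users currently satisfied
print(adjust_femto_power(15.0, macro_outage_count=3, femto_rate_ok=True))   # 12.0 dBm
print(muting_ratio(cre_user_share=0.4, dwell_factor=0.8))                   # 0.32
```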
125

Lightweight Security Solutions for LTE/LTE-A Networks / Solutions de Sécurité Légers pour les Réseaux LTE/LTE-A

Hussein, Soran 08 December 2014 (has links)
Recently, the 3rd Generation Partnership Project (3GPP) standardized the LTE/LTE-A (Long Term Evolution/LTE-Advanced) systems, which have been approved by the International Telecommunication Union (ITU) as fourth-generation (4G) mobile telecommunication networks. Security is one of the critical issues that must be handled carefully to protect the operator's and the users' information, and the 3GPP has therefore standardized several algorithms and protocols to secure the communications between the different entities of the network. However, raising the security level of such systems should not impose heavy constraints on them, such as high computational complexity or high energy consumption; indeed, energy efficiency has recently become a critical need for operators, both to reduce the carbon footprint and to lower operational costs. Security services in mobile networks, such as authentication, data confidentiality and data integrity, are mostly provided by cryptographic techniques, yet most of the standardized solutions adopted by the 3GPP rely on encryption algorithms of high computational complexity, which in turn increases the energy consumed by the communicating entities. Data confidentiality, which mainly refers to ensuring that information is accessible only to authorized parties, is achieved at the PDCP (Packet Data Convergence Protocol) sub-layer of the LTE/LTE-A protocol stack by one of three standardized algorithms (EEA1, EEA2 and EEA3). Each of them requires high computational complexity because they follow Shannon's principles of confusion and diffusion applied over several rounds. In this thesis we propose a new confidentiality algorithm based on substitution and diffusion in which the required security level is reached in a single round; the computational complexity is therefore considerably reduced, which lowers the energy consumed by the encryption and decryption functions. The same approach is used to reduce the complexity of the 3GPP data-integrity algorithms (EIA1, EIA2 and EIA3), whose core ciphers rely on the same complex functions. Finally, the thesis investigates authentication in the context of the Device-to-Device (D2D) communication paradigm introduced in 4G systems. D2D refers to direct communication between two mobile terminals without passing through the core network and is a promising means of increasing performance and reducing energy consumption in LTE/LTE-A networks, but authentication and key derivation between two terminals in this context had not been well studied. We therefore propose a new lightweight authentication and key-derivation protocol that authenticates the D2D devices during session establishment and derives the keys needed for both data encryption and integrity protection.
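To give an idea of what a single-round substitution/diffusion structure looks like, here is a deliberately simple toy: a key-derived byte substitution (confusion) followed by two cheap chaining passes (diffusion). It only illustrates why one round is computationally light; it is not the cipher proposed in the thesis, not a 3GPP EEA/EIA algorithm, and it offers no real security.

```python
import hashlib

def keyed_sbox(key: bytes):
    """Derive a byte-substitution table (a permutation of 0..255) from the key
    with a Fisher-Yates shuffle driven by a hash-based keystream. Toy only."""
    stream = hashlib.sha256(key).digest()
    while len(stream) < 512:
        stream += hashlib.sha256(stream).digest()
    sbox = list(range(256))
    for i in range(255, 0, -1):
        j = ((stream[2 * i] << 8) | stream[2 * i + 1]) % (i + 1)
        sbox[i], sbox[j] = sbox[j], sbox[i]
    return sbox

def one_round_encrypt(block: bytes, key: bytes) -> bytes:
    """Single round: substitution (confusion), then a forward and a backward
    chaining pass (diffusion) so every output byte depends on every input byte.
    Both passes are invertible, so decryption just runs them in reverse.
    NOT secure and NOT the thesis algorithm: for illustration only."""
    sbox = keyed_sbox(key)
    x = [sbox[b] for b in block]
    for i in range(1, len(x)):
        x[i] = (x[i] + x[i - 1]) & 0xFF
    for i in range(len(x) - 2, -1, -1):
        x[i] = (x[i] + x[i + 1]) & 0xFF
    return bytes(x)

print(one_round_encrypt(b"PDCP payload demo", b"session key").hex())
```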
126

[en] COVERAGE AND CAPACITY PLANNING FOR LTE-ADVANCED NETWORKS / [pt] PLANEJAMENTO DE COBERTURA E CAPACIDADE DE REDES LTE-ADVANCED

DANIEL YUJI MITSUTAKE CUETO 09 November 2018 (has links)
Wireless broadband access is becoming a reality not only for corporate and home use but also for mobile users. Of the estimated 1.8 billion broadband users since 2012, about two thirds will be mobile broadband consumers, most of them served by HSPA (High Speed Packet Access) and LTE-A (Long Term Evolution-Advanced) networks, an evolution of 4G capable of offering speeds above 500 Mbps. Cellular planning aims to establish an adequate and efficient radio network in terms of coverage, capacity, quality of service (QoS), cost, frequency usage, equipment deployment and performance. The objective of this work is to study coverage and capacity planning methods for LTE-Advanced cellular systems and to propose a step-by-step methodology for the initial planning and dimensioning of the number of base stations required to serve a given area with the required capacity. A case study illustrates the application of the proposed methodology, together with a comparative analysis of the resources required to meet the project specifications when using the 2.6 GHz band, currently licensed in Brazil, and the 700 MHz band, which is under consideration for future use. The results clearly quantify the advantages of the 700 MHz band over the 2.6 GHz band.
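The dimensioning step of such a methodology reduces to a link budget, a path-loss model inverted for the cell radius, and a site count. The sketch below uses a generic log-distance model with assumed powers, margins and path-loss exponent (the thesis applies its own models and parameters), but it reproduces the qualitative 700 MHz versus 2.6 GHz comparison.

```python
import math

def max_path_loss_db(tx_power_dbm=46.0, ant_gain_dbi=17.0,
                     ue_sensitivity_dbm=-105.0, margins_db=35.0):
    """Downlink link budget -> maximum allowed path loss. All figures
    (eNB power, antenna gain, sensitivity, shadowing/penetration/interference
    margins) are illustrative assumptions."""
    return tx_power_dbm + ant_gain_dbi - ue_sensitivity_dbm - margins_db

def cell_radius_km(mapl_db, f_hz, path_loss_exp=3.5, d0_m=100.0):
    """Invert a log-distance model: PL(d) = FSPL(d0, f) + 10*n*log10(d/d0)."""
    fspl_d0 = 20 * math.log10(4 * math.pi * d0_m * f_hz / 3e8)
    return d0_m * 10 ** ((mapl_db - fspl_d0) / (10 * path_loss_exp)) / 1000.0

def sites_needed(area_km2, radius_km):
    """Approximate hexagonal cells: area per site ~ 2.6 * R^2."""
    return math.ceil(area_km2 / (2.6 * radius_km ** 2))

mapl = max_path_loss_db()
for f in (700e6, 2.6e9):
    r = cell_radius_km(mapl, f)
    print(f"{f / 1e6:.0f} MHz: radius {r:.1f} km, sites for 1000 km2: {sites_needed(1000, r)}")
```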
127

Análise de desempenho do protocolo TCP em Redes LTE. / Performance evaluation of TCP protocol in LTE Networks.

Carlos Alberto Leite Bello Filho 26 February 2014 (has links)
The growth of broadband services in mobile networks has created a demand for ever faster, higher-quality data. The mobile network technology called LTE (Long Term Evolution), or fourth generation (4G), emerged to meet this demand for wireless access to services such as Internet access, online gaming, VoIP and video conferencing. LTE is part of 3GPP Releases 8 and 9; it operates over an all-IP network and provides transmission rates above 100 Mbps in the downlink and 50 Mbps in the uplink, low latency (10 ms) and compatibility with earlier generations of mobile networks, 2G (GSM/EDGE) and 3G (UMTS/HSPA). The TCP protocol, designed to operate over wired networks, performs poorly over wireless channels such as cellular networks, mainly because of selective fading, shadowing and the high error rates of the air interface: since every loss is interpreted as congestion, protocol performance suffers. The objective of this dissertation is to evaluate, through simulation, the performance of several TCP variants under channel interference between the mobile terminal (UE, User Equipment) and a remote server. The NS3 simulator (Network Simulator version 3) was used with TCP Westwood Plus, New Reno, Reno and Tahoe. The results show that TCP Westwood Plus outperforms the others. TCP New Reno and Reno performed very similarly because the interference model used has a uniform distribution, so the probability of losing consecutive bits within the same transmission window is low. TCP Tahoe, as expected, showed the worst performance of all, since it lacks the fast-recovery mechanism and its congestion window always returns to one segment after a timeout. It was also observed that delay matters more to TCP performance than the bandwidth of the access and backbone links, since in the tested scenario the bottleneck was the air interface. Simulations with errors on the air interface, introduced with the NS3 fading script, showed that the RLC AM (acknowledged) mode performs better than the RLC UM (unacknowledged) mode for file-transfer applications in noisy environments.
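The gap the dissertation observes between Tahoe and the Reno family comes down to how the congestion window reacts to a detected loss. The idealized per-RTT trace below is a textbook simplification with an assumed loss round and initial threshold, not the ns-3 TCP models used in the study: Tahoe restarts slow start from one segment, while Reno/New Reno resume from half the previous window.

```python
def cwnd_trace(variant, rounds=20, loss_rounds=(8,), init_ssthresh=32):
    """Idealised congestion window (in segments), one value per RTT, for a
    single flow; a loss detected by triple duplicate ACKs occurs at the
    rounds listed in loss_rounds."""
    cwnd, ssthresh, trace = 1.0, float(init_ssthresh), []
    for r in range(rounds):
        trace.append(int(cwnd))
        if r in loss_rounds:
            ssthresh = max(cwnd / 2, 2.0)
            # Tahoe restarts slow start from one segment; Reno / New Reno use
            # fast recovery and continue in congestion avoidance at ssthresh.
            cwnd = 1.0 if variant == "tahoe" else ssthresh
        elif cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)   # slow start
        else:
            cwnd += 1.0                      # congestion avoidance: +1 MSS per RTT
    return trace

for variant in ("tahoe", "reno"):
    print(variant, cwnd_trace(variant))
```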
129

Implementação de processador banda base OFDMA para downlink LTE em FPGA / Implementation of an OFDMA baseband processor for the LTE downlink on FPGA

Silva, Bruno Leonardo Mendes Tavares 31 March 2011 (has links)
This work presents the hardware implementation of an OFDMA baseband processor for the LTE downlink. LTE (Long Term Evolution) is the final stage in the evolution of the so-called 3G (third-generation mobile) technologies, offering higher data rates and greater efficiency and flexibility in transmission through advanced antenna and multi-carrier techniques. In its physical layer, LTE applies OFDMA (Orthogonal Frequency Division Multiple Access) for signal generation and for mapping physical resources in the downlink, building on the OFDM (Orthogonal Frequency Division Multiplexing) multi-carrier technique. With the recent completion of the LTE specifications, different hardware solutions have been developed, mainly at the symbol-processing level, where the implementation of a baseband OFDMA processor is commonly considered, since it is also the basic architecture of other important applications. For the implementation, reconfigurable hardware offered by devices such as FPGAs is used, which not only meets the high flexibility and adaptability requirements of LTE but also allows a fast and efficient implementation. The processor implemented in reconfigurable hardware meets the specifications of the LTE physical layer and is flexible enough to serve other standards and applications that use an OFDMA processor as the basic architecture of their systems. Results obtained through simulation and functional verification confirm the functionality and flexibility of the implemented processor.
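The per-symbol datapath such a processor implements, subcarrier mapping, IFFT and cyclic-prefix insertion on the transmit side and the reverse on the receive side, can be sketched in a few lines of floating-point Python. The FFT size, cyclic-prefix length and QPSK mapping below are illustrative assumptions; the thesis targets fixed-point FPGA hardware rather than this model.

```python
import numpy as np

N_FFT, N_CP = 128, 9    # assumed toy dimensions (1.4 MHz LTE uses a 128-point FFT)

def ofdma_tx(user_bits, user_subcarriers):
    """Map each user's QPSK symbols onto its own subcarriers, take the IFFT
    and prepend a cyclic prefix: the transmit-side per-symbol datapath."""
    grid = np.zeros(N_FFT, dtype=complex)
    for user, bits in user_bits.items():
        b = np.asarray(bits).reshape(-1, 2)
        grid[user_subcarriers[user]] = ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)
    symbol = np.fft.ifft(grid) * np.sqrt(N_FFT)
    return np.concatenate([symbol[-N_CP:], symbol])       # cyclic prefix + useful part

def ofdma_rx(samples, user_subcarriers):
    """Strip the cyclic prefix, FFT back to the frequency domain and pick out
    each user's subcarriers (ideal channel, no equalisation)."""
    grid = np.fft.fft(samples[N_CP:]) / np.sqrt(N_FFT)
    return {u: grid[sc] for u, sc in user_subcarriers.items()}

# toy check: two users with one resource block (12 subcarriers) each
alloc = {"ue1": np.arange(1, 13), "ue2": np.arange(13, 25)}
bits = {u: np.random.randint(0, 2, 24) for u in alloc}     # 24 bits -> 12 QPSK symbols
recovered = ofdma_rx(ofdma_tx(bits, alloc), alloc)         # equals the transmitted QPSK points
```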
130

Rede de acesso virtualizada: alocação e posicionamento de recursos / Virtualized radio access networks: centralization, allocation, and positioning of resources

Souza, Phelipe Alves de 05 October 2018 (has links)
There are great expectations around centralization (C-RAN) and network function virtualization (NFV) technologies, especially given their potential to accelerate the deployment of new services while lowering the costs of network operators. Several works have discussed the benefits of deploying a new network infrastructure with such technologies, but only a few have investigated how the transition from a legacy network could be carried out. In this context there is a relevant problem involving three main questions: 1) which network sites should be upgraded; 2) how the selected site should be upgraded, i.e., to fully virtualized or not; and 3) which nodes should serve the virtualized sites. These questions are influenced by the level of centralization employed in a given radio access network (RAN). We propose two optimization models and two heuristics that allow the decision maker to set the desired level of centralization and to evaluate its impact on metrics such as the required investment and the level of centralization actually achieved. The models show how the investment should be applied according to the level of centralization and the relative cost of the different resources. The heuristics perform similarly to the exact approach on relatively small instances of the problem, yet are able to solve network topologies with a large number of vertices while keeping the solution satisfactorily close to optimal.
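To make the decision problem concrete, here is a deliberately naive greedy sketch that picks sites to virtualize by centralization gain per unit cost under a CAPEX budget. It is a generic knapsack-style heuristic with invented data, not one of the two optimization models or two heuristics proposed in the dissertation.

```python
def greedy_upgrade(sites, budget):
    """Greedy site selection by centralization gain per unit cost under a
    CAPEX budget; returns the chosen sites and the total spend."""
    chosen, spent = [], 0.0
    for name, cost, gain in sorted(sites, key=lambda s: s[2] / s[1], reverse=True):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen, spent

sites = [   # (site, upgrade cost in arbitrary units, centralization gain) - assumed data
    ("cell-01", 4.0, 10.0), ("cell-02", 2.0, 7.0),
    ("cell-03", 5.0, 6.0),  ("cell-04", 1.0, 2.5),
]
print(greedy_upgrade(sites, budget=7.0))   # (['cell-02', 'cell-01', 'cell-04'], 7.0)
```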
