261 |
Physical Cell ID Allocation in Cellular Networks. Nyberg, Sofia, January 2016 (has links)
In LTE networks, there are several properties that need to be carefully planned for the network to perform well. As network traffic increases and networks become denser, this planning becomes even more important. The Physical Cell ID (PCI) is the identifier of a network cell in the physical layer. This property is limited to 504 values and therefore needs to be reused in the network. If the PCI assignment is poorly planned, the risk of network conflicts is high. In this work, the aim is to develop a distributed approach where the planning is performed by the cells involved in the specific conflict. Initially, the PCI allocation problem is formulated mathematically and is proven to be NP-complete by a reduction to the vertex colouring problem. Two optimisation models are developed, which minimise the number of PCI changes and the number of PCIs used within the network, respectively. An approach is developed that enlarges the traditional decision basis for a distributed approach by letting the confused cell request neighbour information from its confusion-causing neighbours. The approach is complemented with several decision rules so that the confused cell can make as good a decision as possible and thereby mimic the behaviour of the optimisation models. Three different algorithms are implemented using the approach as a basis. For evaluation purposes, two additional algorithms are implemented: one that is applicable to today's standard and one inspired by the work of Amirijoo et al. The algorithms were tested against three different test scenarios in which the PCI range was narrowed, the number of cells was increased and the network was extended. The algorithms were also tested on a small-world network. The testing showed promising results for the approach, especially for larger and denser networks.
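As a hedged illustration of the kind of neighbour-aware PCI selection discussed above (a minimal sketch only; the function, the two-hop rule and the reuse preference are assumptions, not the thesis's algorithms or optimisation models):

```python
# Illustrative sketch (not the thesis's algorithms): a cell involved in a conflict gathers
# the PCIs of its one- and two-hop neighbours, then picks a conflict-free PCI, preferring
# values already used elsewhere so that the number of distinct PCIs stays small.
from typing import Dict, Set, Optional

NUM_PCIS = 504  # LTE PCI range: 0..503


def pick_pci(cell: str,
             neighbours: Dict[str, Set[str]],
             pci: Dict[str, int]) -> Optional[int]:
    """Return a PCI for `cell` that differs from all one- and two-hop neighbours."""
    blocked = set()
    for n in neighbours.get(cell, set()):
        blocked.add(pci[n])                       # collision avoidance (one hop)
        for nn in neighbours.get(n, set()) - {cell}:
            blocked.add(pci[nn])                  # confusion avoidance (two hops)
    # Prefer reusing a PCI already assigned somewhere in the network,
    # loosely mimicking the "minimise the number of PCIs used" objective.
    in_use = sorted(set(pci.values()) - blocked)
    candidates = in_use + [p for p in range(NUM_PCIS) if p not in blocked]
    return candidates[0] if candidates else None  # None: no conflict-free PCI left


# Toy example: cell "C" is confused because its neighbours "A" and "B" share PCI 1,
# so a new PCI is chosen for one of the confusion-causing neighbours.
neighbours = {"C": {"A", "B"}, "A": {"C"}, "B": {"C"}}
pci = {"C": 7, "A": 1, "B": 1}
print(pick_pci("A", neighbours, pci))  # a PCI different from 1 and 7
```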
|
262 |
Méthodologies pour l'évaluation de performance système à grand échelle avec applications au système LTE / Scalable system level evaluations for LTE using PHY abstraction. Latif, Imran, 28 August 2013 (has links)
The main focus of this thesis is to highlight the importance of PHY abstraction for system level evaluations in the framework of 3GPP Long Term Evolution (LTE) networks. This thesis presents a pragmatic approach towards the use of PHY abstraction in LTE based system level simulators. PHY abstraction is an extremely valuable low complexity tool for efficient and realistic large scale system evaluations. This thesis shows that, apart from its primary purpose of providing an instantaneous link quality indicator for system level evaluations, PHY abstraction can further be used for improved channel quality indicator (CQI) feedback based on different antenna configurations, and for performance prediction of LTE networks based on real life channel measurements. This thesis is mainly divided into two parts: methodologies and applications. The first part presents the complete design and validation methodology of PHY abstraction schemes for various antenna configurations corresponding to different transmission modes in LTE. The validation is performed using link level simulators, and it also highlights the calibration issues necessary for the PHY abstraction to be accurate in predicting the performance of capacity-achieving turbo codes.
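For readers unfamiliar with PHY abstraction, the general idea is to compress the per-subcarrier SINRs of a frequency-selective link into a single effective SINR whose AWGN performance predicts the link's BLER. The sketch below shows one common scheme, Exponential Effective SINR Mapping (EESM), as a minimal illustration only; the thesis designs and calibrates its own abstraction schemes, and the β value here is an arbitrary assumption.

```python
import math


def eesm_effective_sinr(sinrs_db, beta):
    """Exponential Effective SINR Mapping: compress per-subcarrier SINRs (in dB)
    into a single effective SINR (in dB). `beta` is a per-MCS calibration factor."""
    sinrs_lin = [10 ** (s / 10.0) for s in sinrs_db]
    avg = sum(math.exp(-s / beta) for s in sinrs_lin) / len(sinrs_lin)
    return 10.0 * math.log10(-beta * math.log(avg))


# Example: a frequency-selective channel whose subcarrier SINRs span 0..12 dB.
print(round(eesm_effective_sinr([0, 3, 6, 9, 12], beta=2.0), 2))  # ~4.7 dB effective
```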
|
263 |
Délestage de données en D2D : de la modélisation à la mise en oeuvre / Device-to-device data offloading: from model to implementation. Rebecchi, Filippo, 18 September 2015 (has links)
Mobile data traffic is expected to reach 24.3 exabytes by 2019. Accommodating this growth in a traditional way would require major investments in the radio access network. In this thesis, we turn our attention to an unconventional solution: mobile data offloading through device-to-device (D2D) communications. Our first contribution is DROiD, an offloading strategy that exploits the availability of the cellular infrastructure as a feedback channel to track the dissemination of a content. DROiD adapts the injection strategy to the pace of the dissemination, making it at the same time reactive and relatively simple, and allowing a significant amount of cellular data to be saved even under tight delivery delay constraints. Then, we shift the focus to the gains that D2D communications could bring if coupled with multicast wireless transmissions. We demonstrate that by employing a wise balance of multicast and D2D communications we can improve both the spectral efficiency and the load in cellular networks. In order to let the network adapt to current conditions, we devise a learning strategy based on the multi-armed bandit algorithm to identify the best mix of multicast and D2D communications. Finally, we investigate cost models for operators wanting to reward users who cooperate in D2D offloading. We propose separating the notion of seeders (users that carry content but do not distribute it) and forwarders (users that are tasked to distribute content). With the aid of an analytic framework based on Pontryagin's Maximum Principle, we develop an optimal offloading strategy. The results provide insight into the interactions between seeders, forwarders, and the evolution of data dissemination.
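As a rough sketch of how a multi-armed bandit could pick a multicast/D2D mix online (an epsilon-greedy variant with a synthetic reward; the thesis's actual bandit algorithm and reward definition are not reproduced here):

```python
# Illustrative epsilon-greedy bandit. Each "arm" is a candidate multicast/D2D mix, and the
# reward stands in for a measured KPI, e.g. offloaded bytes per unit of spectrum used.
import random


class EpsilonGreedy:
    def __init__(self, arms, epsilon=0.1):
        self.arms = list(arms)
        self.epsilon = epsilon
        self.counts = {a: 0 for a in self.arms}
        self.values = {a: 0.0 for a in self.arms}   # running mean reward per arm

    def select(self):
        if random.random() < self.epsilon:
            return random.choice(self.arms)                       # explore
        return max(self.arms, key=lambda a: self.values[a])       # exploit

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]  # incremental mean


# Candidate mixes: fraction of a content pushed over multicast, the rest over D2D.
bandit = EpsilonGreedy(arms=[0.0, 0.25, 0.5, 0.75, 1.0])
for _ in range(1000):
    mix = bandit.select()
    reward = random.gauss(1.0 - abs(mix - 0.5), 0.1)  # synthetic stand-in for a measured KPI
    bandit.update(mix, reward)
print(max(bandit.values, key=bandit.values.get))      # best mix found so far
```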
|
264 |
Multipath Mitigation and TOA Estimation for LTE-Sidelink Positioning. Daffron, Isaac, 09 July 2019
No description available.
|
265 |
A Multi-User Coordination Scheme for LTE Indoor Positioning System. Vemuri, Krishna Karthik, January 2020
No description available.
|
266 |
Fuzz testing on eNodeB over the air interface : Using fuzz testing as a means of testing security. Pestrea, Anna, January 2021
In modern society, security has become an increasingly important subject, as technology has become an integrated part of everyday life. The security of a system can be tested with the help of fuzzing, where incoming messages to the system are altered. In this thesis, a fuzzer was developed targeting an E-UTRAN Node B (eNB) in the Long-Term Evolution (LTE) landscape. The eNB is a current prototype from the company Ericsson. The fuzzer is particularly designed for testing the Medium Access Control (MAC) layer of the eNB. The fuzzer uses a genetic method where all of the fuzzer's flags (the R, F2, E, LCID, F and L flags) are triggered during the fuzzing period. Depending on the output of the first generation of fuzzed values, new values are generated either by choosing a value close to the original value or by choosing a value that belongs to the same subgroup as the original value. Four test cases were made, where the first test case is the baseline of the program and the other three test cases fuzz the eNB using different parts of the fuzzer. The results show that, depending on which parts of the fuzzer are used, the connection behaves differently. For tests two and three, the connection became increasingly unstable and more data was present in the connection. Test case four, however, did not deviate much from the baseline compared to tests two and three.
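To make the flag mutation concrete, the sketch below packs the MAC subheader fields named in the abstract (R, F2, E, LCID, then F and L) and perturbs them in a loosely genetic way. The bit layout sketched here follows the LTE MAC subheader format of 3GPP TS 36.321, but the mutation logic and all values are illustrative assumptions, not the thesis's fuzzer.

```python
# Rough illustration only: pack the MAC subheader flags and fuzz them. Not the thesis's
# fuzzer; the mutation step and chosen values are placeholders.
import random


def pack_subheader(r, f2, e, lcid, f=None, l=None):
    """Pack R|F2|E|LCID into the first octet, and optionally F|L (7-bit length) into a second."""
    first = (r & 1) << 7 | (f2 & 1) << 6 | (e & 1) << 5 | (lcid & 0x1F)
    octets = bytes([first])
    if f is not None and l is not None:
        octets += bytes([(f & 1) << 7 | (l & 0x7F)])
    return octets


def mutate(value, width_bits):
    """Genetic-style step: either nudge the value or jump within its value range."""
    if random.random() < 0.5:
        return max(0, min((1 << width_bits) - 1, value + random.choice([-1, 1])))
    return random.randrange(1 << width_bits)


lcid = 0b00011                                   # e.g. a logical channel ID
fuzzed = pack_subheader(r=mutate(0, 1), f2=mutate(0, 1), e=mutate(1, 1),
                        lcid=mutate(lcid, 5), f=mutate(0, 1), l=mutate(20, 7))
print(fuzzed.hex())                              # the fuzzed subheader bytes
```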
|
267 |
Ontology based framework for Tactile Internet and Digital Twin Applications. Adhami, Hikmat, 09 August 2022 (has links)
In the era of Industry 4.0 and the Digital Twin (DT), integrating Audio-Video, Virtual Reality, Augmented Reality and Haptics (from the Greek word Haptikos, meaning "able to touch"), and of the Tactile Internet (TI), it is clear that telecom stakeholders need different network requirements to provision high quality services with respect to the new standards. This era, embodied by the TI, will achieve a true paradigm shift from content-delivery to skill-set-delivery networks, thanks to recent technical breakthroughs. It will build a new internet structure with improved capabilities, but it will be difficult to meet the technical needs of the TI with current fourth generation (4G) mobile communication systems. As a result, 5G mobile communication systems will be used at the wireless edge and as a key enabler for the TI, thanks to their automated core network functionalities.
Because of the COVID-19 outbreak, most daily activities such as employment, research, and education are now conducted online rather than in person. As a result, internet traffic has risen dramatically. The Tactile Internet is currently in its infancy in terms of worldwide deployment. For this reason, and because of the growing need for its applications, the feasibility of these applications on existing, already deployed network infrastructures, especially in developing countries, is thought to be very hard, even quasi-impossible. Since 5G has not yet reached its convergence stage (i.e., it is not deployed everywhere), and since there is huge stress on mobile communications given that the world is still facing the COVID-19 pandemic and all activities are taking place online, we propose to design and implement a QoS framework to facilitate the feasibility and applicability of TI systems where no 5G infrastructure is deployed. This framework predicts the most suitable network type to be deployed for given TI applications with given KPIs (Key Performance Indicators). The framework is also scalable, in that it can even point to future Next Generation Mobile Network (NGMN) types, if necessary.
Dealing with TI applications means dealing with haptics added to audio and video streams. Therefore, performance evaluation for haptic networks is required, and since there are different types of haptic networks, interoperability is needed. Consequently, a form of standardization is necessary to annotate and describe the haptic network. The first idea that comes to mind is the use of ontologies, in which we can add intelligent rules to infer additional data and predict resource requirements in order to achieve better performance. Many works in the literature rely on Artificial Intelligence approaches to tackle the above-mentioned standardization, but very few depend on ontologies, and those lack futuristic outcomes, especially for the optimization problem. By optimization we mean the optimal types, methods and rules that can accommodate the applicability of TI systems (the application KPIs) in an acceptable environment or infrastructure (the networking KPIs) and, even more, infer the most optimal network type.
To help manufacturing companies take full advantage of the TI, we propose new methods and tools (ontologies) to intelligently handle TI, DT (Digital Twin) and IoT (Internet of Things) sensor data, process the data at the edge of the network, and deliver faster insights. The outcomes of these ontologies have been validated through two case studies: in the first, we simulated TI traffic over Wi-Fi, WiMAX and UMTS (3G) infrastructures; in the second, we used 4G (LTE-A), along with SDN (Software Defined Networking) integrated with MEC (Mobile Edge Computing), as the networking backbone. The results, in terms of QoS KPI performance evaluation, show high relevance to the outcomes of our proposed ontologies.
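As a toy stand-in for the kind of inference the ontology rules perform, the snippet below maps application KPIs to a candidate access network. The thresholds, network list and rule order are assumptions for illustration, not taken from the thesis or from any standard.

```python
# Toy rule-based KPI-to-network mapping; the thesis encodes such logic as ontology rules.
def suggest_network(latency_ms: float, reliability: float, rate_mbps: float) -> str:
    if latency_ms <= 1 and reliability >= 0.99999:
        return "5G URLLC (or beyond)"            # strict Tactile Internet targets
    if latency_ms <= 10 and rate_mbps <= 100:
        return "LTE-A with MEC/SDN at the edge"  # the second case study's backbone
    if latency_ms <= 50 and rate_mbps <= 10:
        return "Wi-Fi / WiMAX / UMTS"            # the first case study's infrastructures
    return "No deployed infrastructure meets these KPIs"


print(suggest_network(latency_ms=5, reliability=0.999, rate_mbps=50))
```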
|
268 |
Machine Learning-Enabled Radio Resource Management for Next-Generation Wireless Networks. Elsayed, Medhat, 27 July 2021 (has links)
A new era of wireless networks is evolving, thanks to significant advances in communications and networking technologies. In parallel, wireless services are witnessing a tremendous change due to increasingly heterogeneous and stringent demands, whose quality of service requirements are expanding in several dimensions, putting pressure on mobile networks. Examples of those services are augmented and virtual reality, as well as self-driving cars. Furthermore, many physical systems are witnessing a dramatic shift towards autonomy by enabling the devices of those systems to communicate and transfer control and data information among themselves. Examples of those systems are microgrids, vehicles, etc. As such, the mobile network indeed requires a revolutionary shift in the way radio resources are assigned to those services, i.e., in radio resource management (RRM).
In RRM, radio resources such as spectrum and power are assigned to users of the network according to various metrics such as throughput, latency, and reliability. Several methods have been adopted for RRM, such as optimization-based methods, heuristics, and so on. However, these methods face several challenges, including complexity, scalability, optimality, and the ability to learn dynamic environments. In particular, a common problem in conventional RRM methods is the failure to adapt to changing situations. For example, optimization-based methods perform well under static network conditions, where an optimal solution is obtained for a snapshot of the network; this leads to high complexity, as the network is required to solve the optimization at every time slot. Machine learning constitutes a promising tool for RRM, with the aim of addressing these conflicting objectives, i.e., KPIs, complexity, scalability, etc.
In this thesis, we study the use of reinforcement learning and its derivatives for improving network KPIs. We highlight the advantages of each reinforcement learning method under the studied network scenarios. In addition, we highlight the gains and trade-offs among the proposed learning techniques as well as the baseline methods that rely on either optimization or heuristics. Finally, we present the challenges facing the application of reinforcement learning to wireless networks and propose some future directions and open problems toward an autonomous wireless network.
The contributions of this thesis can be summarized as follows. First, reinforcement learning methods, and in particular model-free Q-learning, suffer from long convergence times due to the large state-action space. As such, deep reinforcement learning was employed to improve generalization and speed up convergence. Second, the design of the state and reward functions impacts the performance of the wireless network. Despite the simplicity of this observation, it turns out to be a key one for designing autonomous wireless systems. In particular, in order to facilitate autonomy, agents need the ability to learn and adjust their goals. In this thesis, we propose transfer in reinforcement learning to address this point, where knowledge is transferred between expert and learner agents with simple and complex tasks, respectively. As such, the learner agent aims to learn a more complex task using the knowledge transferred from an expert performing a simpler (partial) task.
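For concreteness, a textbook tabular Q-learning update of the kind the thesis builds on; the state encoding, actions and reward below are placeholders, not the agents designed in the thesis.

```python
# Generic tabular Q-learning sketch for assigning a resource block given a network state.
import random
from collections import defaultdict

alpha, gamma, epsilon = 0.1, 0.9, 0.1
actions = ["RB_0", "RB_1", "RB_2"]              # e.g. candidate resource blocks
Q = defaultdict(float)                          # Q[(state, action)]


def choose(state):
    if random.random() < epsilon:
        return random.choice(actions)                         # explore
    return max(actions, key=lambda a: Q[(state, a)])          # exploit


def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])


# One interaction step: the environment would report e.g. throughput or latency as reward.
s = "low_load"
a = choose(s)
update(s, a, reward=1.0, next_state="low_load")
print(Q[(s, a)])
```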
|
269 |
Improving the Energy Efficiency of Cellular IoT Device. Abbas, Muhammad Tahir, January 2023
Cellular Internet of Things (CIoT) has emerged as a promising technology to support applications that generate infrequent data. One requirement on these applications, which often run on battery-powered devices, is low energy consumption to enable extended battery life. Narrowband IoT (NB-IoT) is a promising technology for IoT due to its low power consumption, which is essential for devices that need to run on battery power for extended periods. However, the current battery life of NB-IoT devices is only a few years, which is insufficient for many applications. This thesis investigates the impact of energy-saving mechanisms standardized by 3GPP on the battery life of NB-IoT devices. The main research objectives are to classify and analyze existing energy-saving solutions for CIoT and examine their limitations, to study the impact of standardized energy-saving mechanisms on the battery life of NB-IoT devices, both in isolation and combined, and to provide guidelines on how to configure NB-IoT devices to reduce energy consumption efficiently. The research aims to provide a deeper understanding of the effect of energy-saving mechanisms and best practices to balance the energy efficiency and performance of NB-IoT devices. Applying the proposed solutions makes it possible to achieve a battery life of 10 years or more for CIoT devices.
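The 3GPP energy-saving mechanisms referred to here (such as PSM and eDRX) work by keeping the device in deep sleep between infrequent transmissions. A back-of-the-envelope estimate of how that translates into battery life, with assumed current figures and duty cycle, might look as follows:

```python
# Back-of-the-envelope battery-life estimate. The capacity, duty cycle and current figures
# are assumptions for illustration, not measurements from the thesis.
def battery_life_years(capacity_mah=5000,
                       active_ma=100.0, active_s_per_day=30,
                       sleep_ua=3.0):
    """Average current = time-weighted mix of active (transmit/receive) and deep-sleep
    (e.g. PSM) current; battery life = capacity divided by that average current."""
    day_s = 24 * 3600
    avg_ma = (active_ma * active_s_per_day +
              (sleep_ua / 1000.0) * (day_s - active_s_per_day)) / day_s
    return capacity_mah / avg_ma / (24 * 365)


print(round(battery_life_years(), 1))  # ~15 years under the assumed duty cycle
```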
|
270 |
Anomaly Detection and Root Cause Analysis for LTE Radio Base Stations / Anomalitetsdetektion och grundorsaksanalys för LTE Radio Base-stationer. López, Sergio, January 2018 (has links)
This project aims to detect possible anomalies in the resource consumption of radio base stations within the 4G LTE Radio architecture. This has been done by analyzing the statistical data that each node generates every 15 minutes, in the form of "performance maintenance counters". In this thesis, we introduce methods that allow resources to be automatically monitored after software updates, in order to detect any anomalies in the consumption patterns of the different resources compared to the reference period before the update. Additionally, we also attempt to narrow down the origin of anomalies by pointing out parameters potentially linked to the issue.
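One simple baseline for this kind of post-update monitoring is a z-score test of each counter against its pre-update reference period; the sketch below is illustrative only and does not reproduce the methods developed in the thesis.

```python
# Flag post-update counter samples that deviate strongly from the pre-update reference.
import statistics


def flag_anomalies(reference, post_update, threshold=3.0):
    """Return indices of post-update samples more than `threshold` standard deviations
    away from the reference-period mean."""
    mu = statistics.mean(reference)
    sigma = statistics.stdev(reference) or 1e-9   # guard against zero variance
    return [i for i, x in enumerate(post_update) if abs(x - mu) / sigma > threshold]


# PM counter samples collected every 15 minutes (illustrative values).
reference = [100, 102, 98, 101, 99, 103, 97, 100]
post_update = [101, 99, 140, 102, 150]
print(flag_anomalies(reference, post_update))     # -> [2, 4]
```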
|