171

Implementation of Cache Attack on Real Information Centric Networking System

Anto Morais, Faustina J. 01 January 2018 (has links)
Network security remains a major problem in today's Internet. Although denial-of-service and cache attacks have been studied in simulation, attacks on real networks are still under-explored in the research. In this thesis, the effects of cache attacks on real information-centric networking systems were investigated. Cache attacks were implemented in real networks with different cache sizes, using the Least Recently Used (LRU), Random, and First In First Out (FIFO) replacement algorithms to fill the cache in each node. The attacker floods the caches with unpopular content, forcing user requests to be fetched from the origin web servers instead. The cache hit ratio, the time taken to obtain a result, and the number of hops needed to serve a request were measured with real network traffic. The results of the implementation are provided for different topologies and are compared with simulation results.
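
To make the replacement policies concrete, the following minimal Python sketch implements a fixed-size cache with LRU, FIFO, and Random eviction and simulates a cache-pollution attack; the Cache class, sizes, and request names are illustrative assumptions, not details from the thesis:

    import random
    from collections import OrderedDict, deque

    class Cache:
        """Fixed-size cache with LRU, FIFO, or Random replacement."""
        def __init__(self, size, policy="LRU"):
            self.size, self.policy = size, policy
            self.store = OrderedDict()   # name -> cached content
            self.fifo = deque()          # insertion order, used by FIFO

        def request(self, name):
            if name in self.store:                # cache hit
                if self.policy == "LRU":
                    self.store.move_to_end(name)  # refresh recency
                return True
            if len(self.store) >= self.size:      # cache full: evict one
                if self.policy == "LRU":
                    self.store.popitem(last=False)
                elif self.policy == "FIFO":
                    self.store.pop(self.fifo.popleft())
                else:                             # Random
                    self.store.pop(random.choice(list(self.store)))
            self.store[name] = "content"
            if self.policy == "FIFO":
                self.fifo.append(name)
            return False                          # miss: fetch from origin

    # Cache-pollution attack: request many one-off unpopular names so that
    # popular content is evicted and users fall back to the web servers.
    cache = Cache(size=100, policy="LRU")
    for i in range(10_000):
        cache.request(f"/attack/junk-{i}")
    hits = sum(cache.request(f"/popular/{i % 50}") for i in range(1_000))
    print("legitimate hit ratio after attack:", hits / 1_000)
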
172

Optimization algorithms for video service delivery

ABOUSABEA, Emad Mohamed Abd Elrahman 12 September 2012 (has links) (PDF)
The aim of this thesis is to provide optimization algorithms for accessing video services in either unmanaged or managed ways. We study recent statistics about unmanaged video services like YouTube and propose suitable optimization techniques that could enhance file access and reduce access costs. Moreover, this cost analysis plays an important role in decisions about caching video files and how long to host them on servers. For managed video services (IPTV), we conducted experiments on an open-IPTV collaborative architecture between different operators. This model is analyzed in terms of CAPEX and OPEX costs inside the domestic sphere. Moreover, we introduce a dynamic way of optimizing the Minimum Spanning Tree (MST) for the multicast IPTV service. Under nomadic access, static trees may be unable to provide the service efficiently, as bandwidth utilization increases towards the streaming points (the roots of the topologies). Finally, we study reliable security measures in video streaming based on the hash chain methodology and propose a new algorithm. We then compare the different ways of achieving hash-chain reliability according to generic classifications.
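
The hash-chain construction underlying such schemes is simple to sketch. The Python snippet below is illustrative only (the function names and chain length are assumptions; the thesis's own algorithm is not reproduced here): the chain is built backwards so that elements can be revealed and verified forwards, each new element hashing back to the previously trusted one.

    import hashlib

    def build_hash_chain(seed: bytes, n: int) -> list[bytes]:
        """Element i is H applied (n - i) times to the seed; the head of
        the returned list is the public anchor, distributed first."""
        chain = [seed]
        for _ in range(n):
            chain.append(hashlib.sha256(chain[-1]).digest())
        return chain[::-1]

    def verify(prev: bytes, revealed: bytes) -> bool:
        # each newly revealed element must hash back to the previous one
        return hashlib.sha256(revealed).digest() == prev

    chain = build_hash_chain(b"secret-seed", n=5)
    anchor = chain[0]
    assert all(verify(chain[i], chain[i + 1]) for i in range(5))
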
173

Design and Implementation of an Out-of-Core Globe Rendering System Using Multiple Map Services / Design och Implementering av ett Out-of-Core Globrenderingssystem Baserat på Olika Karttjänster

Bladin, Kalle, Broberg, Erik January 2016 (has links)
This thesis focuses on the design and implementation of a software system enabling out-of-core rendering of multiple map datasets mapped onto virtual globes around our solar system. Challenges such as precision, accuracy, curvature, and massive datasets were considered. The result is a globe visualization software using a chunked level-of-detail approach for rendering. The software can render texture layers of various sorts on top of height-mapped geometry to aid in scientific visualization, yielding accurate visualizations rendered at interactive frame rates. The project was conducted at the American Museum of Natural History (AMNH), New York, and serves the goal of implementing a planetary visualization software to aid public presentations and bring space science to the public. The work is part of the development of the software OpenSpace, a collaboration between Linköping University, AMNH, and the National Aeronautics and Space Administration (NASA), among others.
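
The core of a chunked level-of-detail scheme is deciding how deeply to subdivide each globe chunk based on its projected screen-space error. The sketch below shows that decision under stated assumptions (the error model, camera parameters, and thresholds are illustrative, not OpenSpace's actual values):

    import math

    def screen_space_error(level, distance_m, base_error_m=1e5,
                           screen_width_px=1920, fov_rad=1.0):
        """Projected geometric error of a chunk, in pixels. The geometric
        error halves with every quadtree level."""
        geometric_error = base_error_m / (2 ** level)
        pixels_per_radian = screen_width_px / (2 * math.tan(fov_rad / 2))
        return (geometric_error / distance_m) * pixels_per_radian

    def desired_level(distance_m, max_level=20, threshold_px=2.0):
        """Subdivide until the chunk's projected error is below threshold."""
        level = 0
        while level < max_level and \
                screen_space_error(level, distance_m) > threshold_px:
            level += 1
        return level

    print(desired_level(1e7))   # camera far away  -> coarse chunks
    print(desired_level(1e4))   # camera close up  -> fine chunks
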
174

Bootstrapping a Private Cloud

Deepika Kaushal (9034865) 29 June 2020 (has links)
Cloud computing allows on-demand provisioning, configuration, and assignment of computing resources with minimal cost and effort for users and administrators. Managing the physical infrastructure that underlies cloud computing services requires provisioning and managing bare-metal computer hardware. Hence there is a need to quickly load operating systems onto bare-metal and virtual machines to service user demands. The focus of this study is on developing a technique to load these machines remotely, which is complicated by the fact that the machines can be in different Ethernet broadcast domains, physically distant from the provisioning server. Using the available bare-metal provisioning frameworks requires significant skill and time. Moreover, there is no easily implementable standard method of booting across separate Ethernet broadcast domains. This study proposes a new framework to provision bare-metal hardware remotely, using layer 2 services in a secure manner. The framework is a composition of existing tools.
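
The thesis does not spell out its mechanism here, but the usual obstacle is that boot requests are broadcasts that do not cross subnets; relaying them as unicast toward a remote server is the standard workaround. The sketch below is a deliberately simplified illustration of that idea (the server address is hypothetical, and a real deployment would use a proper DHCP relay agent, which also rewrites the giaddr field so replies return correctly):

    import socket

    PROVISIONING_SERVER = ("192.0.2.10", 67)    # hypothetical remote server

    # Listen for broadcast boot/DHCP discovery packets on the local segment
    # and forward them as unicast across layer 3 to the provisioning server.
    relay = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    relay.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    relay.bind(("", 67))                        # DHCP/BOOTP server port

    while True:
        packet, client = relay.recvfrom(4096)   # broadcast from booting node
        relay.sendto(packet, PROVISIONING_SERVER)
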
175

Sécurité pour les réseaux sans fil / Security for wireless communications

Kamel, Sarah 10 March 2017 (has links)
Today, there is a real need to strengthen communication security in anticipation of the development of quantum computing and the attacks that will arise from it. This work explores two complementary techniques that provide confidentiality for data transmitted over wireless networks. In the first part, we focus on lattice-based public-key cryptography, one of the most promising techniques for post-quantum cryptography. In particular, we focus on the Goldreich-Goldwasser-Halevi (GGH) cryptosystem, for which we propose a new scheme using GLD lattices. In the second part of this work, we study the security of multi-user cache-aided wiretap broadcast channels (BCs) against an external eavesdropper under two secrecy constraints: an individual secrecy constraint and a joint secrecy constraint. We compute upper and lower bounds on the secure capacity-memory tradeoff, considering different cache distributions. To obtain the lower bound, we propose several coding schemes that combine wiretap coding, superposition coding, and piggyback coding. We prove that allocating the cache memory to the weaker receivers is the most beneficial cache distribution.
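
To illustrate the GGH idea in miniature (a toy two-dimensional example with illustrative parameters; the thesis's GLD-based scheme is different and operates at far larger dimensions): encryption hides a lattice point behind a small error vector, and the private, nearly orthogonal basis recovers it by Babai rounding.

    import numpy as np

    R = np.array([[7, 0], [0, 7]])    # "good" private basis
    U = np.array([[2, 1], [1, 1]])    # unimodular transform (det = 1)
    B = U @ R                         # "bad" public basis of the same lattice

    def encrypt(m, B, e):
        return m @ B + e              # lattice point plus small error

    def decrypt(c, R, B):
        # Babai rounding with the good basis recovers the lattice point,
        # then the message is read off in the public basis.
        v = np.rint(c @ np.linalg.inv(R)) @ R
        return np.rint(v @ np.linalg.inv(B)).astype(int)

    m = np.array([3, -2])             # message vector
    e = np.array([1, -1])             # small error vector
    assert (decrypt(encrypt(m, B, e), R, B) == m).all()
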
176

Subtree Hashing of Tests in Build Systems : Rust Tricorder / Subträd Hashing av tester i byggsystem : Rust Tricorder

Capitanu, Calin January 2023 (has links)
Software applications are built by teams of developers that constantly iterate over the codebase. Software projects rely on a build system, which handles the management of dependencies, compilation, testing, and deployment of the software. Executing the tests during each build allows developers to validate that their changes do not introduce regressions. However, executing the test suite during each build can take a long time, potentially slowing the development process. To provide quicker feedback, build systems use incremental building to avoid reprocessing unmodified artifacts. This is achieved by maintaining a cache of source files and rebuilding only the artifacts that differ from their cached version. Yet changing any part of a source file invalidates the cache, triggering the re-execution of unmodified tests. This whole-file granularity can mislead the build system: it cannot determine whether the function actually under test has changed, and so it invokes redundant re-testing. In this thesis, we propose a finer-grained approach to caching within build systems, caching components of the Abstract Syntax Tree instead of entire source files. We compare their hashes on subsequent runs in order to identify the components that have changed. The potential advantage of this strategy is that re-running a specific test that has not been modified can leverage the cache even if the file that contains it has been modified. We implement our approach in a system called TRICORDER and integrate it within a build system called WARP. TRICORDER works by analyzing RUST source code to identify the test cases that have not been changed, for example through the addition of comments or modifications of unrelated functions. This can benefit developers by avoiding the re-execution of unmodified tests. We evaluate our approach on 4 notable open-source RUST projects, targeting a set of 16 tests within them. We first analyze the accuracy with which TRICORDER detects the internal dependencies of a test function, which its code slicing needs in order to cache the code items related to the target test function. We then introduce artificial changes to our study subjects to determine whether or not TRICORDER indicates the tests that need to be re-run. Finally, we analyze the ability of TRICORDER to identify real changes based on the commit history of our study subjects. Our results show that this more granular approach to caching can avoid unnecessary recompilation and re-execution of test cases. An important direction for future work is to extend the current implementation to support the entire set of RUST features in order to evaluate TRICORDER on a larger set of study subjects.
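
The idea of hashing AST subtrees so that cosmetic edits do not invalidate a test's cache can be sketched compactly. TRICORDER targets RUST, but the same principle is easy to show with Python's ast module (the helper name and the source snippets below are illustrative assumptions):

    import ast, hashlib

    def function_hashes(source: str) -> dict[str, str]:
        """Hash each top-level function's AST subtree. Comments and
        formatting never reach the AST, so only semantic changes alter
        a function's hash."""
        tree = ast.parse(source)
        return {
            node.name: hashlib.sha256(
                ast.dump(node, include_attributes=False).encode()
            ).hexdigest()
            for node in tree.body
            if isinstance(node, ast.FunctionDef)
        }

    v1 = function_hashes("def f():\n    return 1\n\ndef test_f():\n    assert f() == 1\n")
    v2 = function_hashes("# new comment\ndef f():\n    return 1\n\ndef test_f():\n    assert f() == 1\n")
    v3 = function_hashes("def f():\n    return 2\n\ndef test_f():\n    assert f() == 1\n")

    assert v1 == v2              # comment-only change: cache stays valid
    assert v1["f"] != v3["f"]    # real change: only f's tests need re-running
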
177

Database System Acceleration on FPGAs

Moghaddamfar, Mehdi 30 May 2023 (has links)
Relational database systems provide various services and applications with an efficient means for storing, processing, and retrieving their data. The performance of these systems has a direct impact on the quality of service of the applications that rely on them. Therefore, it is crucial that database systems are able to adapt and grow in tandem with the demands of these applications, ensuring that their performance scales accordingly. In the past, Moore's law and algorithmic advancements have been sufficient to meet these demands. However, with the slowdown of Moore's law, researchers have begun exploring alternative methods, such as application-specific technologies, to satisfy the more challenging performance requirements. One such technology is field-programmable gate arrays (FPGAs), which provide ideal platforms for developing and running custom architectures for accelerating database systems. The goal of this thesis is to develop a domain-specific architecture that can enhance the performance of in-memory database systems when executing analytical queries. Our research is guided by a combination of academic and industrial requirements that seek to strike a balance between generality and performance. The former ensures that our platform can be used to process a diverse range of workloads, while the latter makes it an attractive solution for high-performance use cases. Throughout this thesis, we present the development of a system-on-chip for database system acceleration that meets our requirements. The resulting architecture, called CbMSMK, is capable of processing the projection, sort, aggregation, and equi-join database operators and can also run some complex TPC-H queries. CbMSMK employs a shared sort-merge pipeline for executing all these operators, which results in an efficient use of FPGA resources. This approach enables the instantiation of multiple acceleration cores on the FPGA, allowing it to serve multiple clients simultaneously. CbMSMK can process both arbitrarily deep and wide tables efficiently. The former is achieved through the use of the sort-merge algorithm which utilizes the FPGA RAM for buffering intermediate sort results. The latter is achieved through the use of KeRRaS, a novel variant of the forward radix sort algorithm introduced in this thesis. KeRRaS allows CbMSMK to process a table a few columns at a time, incrementally generating the final result through multiple iterations. Given that acceleration is a key objective of our work, CbMSMK benefits from many performance optimizations. For instance, multi-way merging is employed to reduce the number of merge passes required for the execution of the sort-merge algorithm, thus improving the performance of all our pipeline-breaking operators. Another example is our in-depth analysis of early aggregation, which led to the development of a novel cache-based algorithm that significantly enhances aggregation performance. 
Our experiments demonstrate that CbMSMK performs on average 5 times faster than the state-of-the-art CPU-based database management system MonetDB.

Table of contents (condensed): Part I, Database Systems & FPGAs: 1 Introduction; 2 Background on Database Systems; 3 Background on FPGAs; 4 Related Work. Part II, Cache-Based Morphing Sort-Merge with KeRRaS (CbMSMK): 5 Objectives and Architecture Overview; 6 Comparative Analysis of OpenCL and RTL for Sort-Merge Primitives on FPGAs; 7 Resource-Efficient Acceleration of Pipeline-Breaking Database Operators on FPGAs; 8 KeRRaS: Column-Oriented Wide Table Processing on FPGAs; 9 A Study of Early Aggregation in Database Query Processing on FPGAs; 10 The Full Picture. Part III, Conclusion: 11 Summary and Outlook on Future Research.
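
One of the optimizations named above, multi-way merging, is easy to illustrate in software: merging k sorted runs at once needs roughly log_k(N) passes instead of log_2(N). The sketch below is a software analogue only (CbMSMK realizes this in FPGA hardware):

    import heapq

    def multiway_merge(runs):
        """Merge k sorted runs in a single pass using a min-heap of heads."""
        heap = [(run[0], i, 0) for i, run in enumerate(runs) if run]
        heapq.heapify(heap)
        out = []
        while heap:
            value, run_idx, pos = heapq.heappop(heap)
            out.append(value)
            if pos + 1 < len(runs[run_idx]):
                heapq.heappush(heap, (runs[run_idx][pos + 1], run_idx, pos + 1))
        return out

    runs = [[1, 5, 9], [2, 6], [0, 3, 7, 8], [4]]
    assert multiway_merge(runs) == sorted(sum(runs, []))
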
178

Jämförelse av cache-tjänster: WSUS Och LanCache / Comparison of cache services: WSUS and LanCache

Shammaa, Mohammad Hamdi, Aldrea, Sumaia January 2023 (has links)
In network technology and data communication there is today considerable confidence in network caching, a technique that stores data so it can later be retrieved more quickly. Over the years, the technique has proven its ability to deliver the desired data to clients efficiently. Several cache services use this technique for Windows updates, among them Windows Server Update Services (WSUS) and LanCache. On behalf of the company TNS Gaming AB, these services are compared with each other in this thesis. Network caching is an interesting research area for future communication systems and networks thanks to its benefits. Likewise, the task of comparing the cache services WSUS and LanCache is interesting because it provides insight into which service is better suited for the company or other stakeholders. Both the research area and the task matter when users want to use their internet connection more efficiently and conserve network resources, since the technique can reduce download times. This work answers questions about the network performance, resource usage, and administration time of each cache service, and about which cache service is better suited to the company's needs. The work comprises experiments, covering three main measurements, followed by a single case study. The purpose of the work is to compare WSUS and LanCache using the measurements from the experiments; the outcome then forms a basis for the future choice of solution. The results consist of two parts. The first shows that both cache services contribute to shorter download times. The second is that LanCache outperforms WSUS in terms of network performance and resource usage, and also requires less administration time. Given these results, the conclusion is that LanCache is the more suitable cache service in this case.
179

Radio resource sharing with edge caching for multi-operator in large cellular networks

Sanguanpuak, T. (Tachporn) 04 January 2019 (has links)
The aim of this thesis is to devise new paradigms for radio resource sharing, including cache-enabled virtualized large cellular networks, for mobile network operators (MNOs). Self-organizing resource allocation for small cell networks is also considered. In such networks, the MNOs rent radio resources from the infrastructure provider (InP) to support their subscribers. Reducing operational costs while significantly increasing the usage of existing network resources leads to a paradigm in which the MNOs share their infrastructure, i.e., base stations (BSs), antennas, spectrum, and edge caches, among themselves. In this regard, we integrate the theoretical insights provided by stochastic geometry to model spectrum and infrastructure sharing in large cellular networks. In the first part of the thesis, we study the non-orthogonal multi-MNO spectrum allocation problem for small cell networks, with the goal of maximizing the overall network throughput, defined as the expected weighted sum rate of the MNOs. Each MNO is assumed to serve multiple small cell BSs (SBSs). We adopt a many-to-one stable matching game framework to tackle this problem, and we also investigate power allocation schemes for the SBSs using Q-learning. In the second part, we model and analyze an infrastructure sharing system with a single buyer MNO and multiple seller MNOs. The MNOs are assumed to operate over their own licensed spectrum bands while sharing BSs, and the seller MNOs compete with each other to sell their infrastructure to the potential buyer. The optimal strategy for the seller MNOs, in terms of the fraction of infrastructure to be shared and its price, is obtained by computing the equilibrium of a Cournot-Nash oligopoly game. Finally, we develop a game-theoretic framework to model and analyze cache-enabled virtualized cellular networks in which the network infrastructure, e.g., BSs and cache storage, owned by an InP, is rented and shared among multiple MNOs. We formulate a Stackelberg game with the InP as the leader and the MNOs as the followers. The InP tries to maximize its profit by optimizing its infrastructure rental fee, while each MNO aims to minimize its infrastructure cost by minimizing the cache intensity under a probabilistic delay constraint for the user (UE). Since the MNOs share their rented infrastructure, we apply a cooperative game concept, namely the Shapley value, to divide the cost among the MNOs.
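
The Shapley value used for the cost division averages each MNO's marginal cost over every order in which the coalition could form. A small sketch under stated assumptions (the rental cost function below is hypothetical, not the thesis's delay-constrained model):

    from itertools import permutations

    def shapley(players, cost):
        """Average marginal cost each player adds over all join orders."""
        shares = dict.fromkeys(players, 0.0)
        orders = list(permutations(players))
        for order in orders:
            coalition = []
            for p in order:
                before = cost(frozenset(coalition))
                coalition.append(p)
                shares[p] += cost(frozenset(coalition)) - before
        return {p: s / len(orders) for p, s in shares.items()}

    # Hypothetical shared-infrastructure rental cost: a fixed fee plus a
    # per-MNO fee, so sharing spreads the fixed part across the coalition.
    def rental_cost(coalition):
        return 0.0 if not coalition else 10.0 + 4.0 * len(coalition)

    print(shapley(["MNO-A", "MNO-B", "MNO-C"], rental_cost))
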
180

Securing Wireless Communication via Information-Theoretic Approaches: Innovative Schemes and Code Design Techniques

Shoushtari, Morteza 21 June 2023 (has links) (PDF)
Historically, wireless communication security solutions have relied heavily on computational methods, such as cryptographic algorithms implemented in the upper layers of the network stack. Although these methods have been effective, they may not always suffice against all security threats. An alternative approach for achieving secure communication is physical layer security, which exploits the physical properties of the communication channel through appropriate coding and signal processing. The goal of this Ph.D. dissertation is to leverage the foundations of information-theoretic security to develop innovative secure schemes and code design techniques that can enhance security and reliability in wireless communication networks. The dissertation comprises three main phases of investigation. The first analyzes the finite-blocklength coding problem for a wiretap channel model equipped with a cache; the objective was to develop and analyze a new wiretap coding scheme for the secure communication of sensitive data. The second investigates information-theoretic security solutions for aeronautical mobile telemetry (AMT) systems, including a secure coding technique for the integrated Network Enhanced Telemetry (iNET) communications system, and examines post-quantum cryptography approaches as future secrecy solutions for AMT systems, focusing on code-based techniques and their feasibility for implementation. Finally, the properties of nested linear codes in the wiretap channel model are explored. This phase begins by establishing a duality relationship between the equivocation matrices of nested linear codes and those of their dual codes, and then introduces a new coding algorithm that constructs optimal nested linear secrecy codes. The algorithm leverages the duality relationship by starting from the worst nested linear secrecy codes in the dual space, which yields the optimal nested linear secrecy code more efficiently and effectively than a direct brute-force search for the best codes.
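
Nested linear codes realize wiretap secrecy through coset coding: the secret message selects a coset of the coarse code inside the fine code, and random bits choose the transmitted word within that coset. A toy binary sketch (the generator rows below are illustrative assumptions, not codes from the dissertation):

    import numpy as np

    G_coarse = np.array([[1, 1, 1, 1]])   # generator of the coarse code C'
    G_msg    = np.array([[1, 0, 1, 0]])   # coset representative generator

    def encode(message_bit, random_bit):
        # message picks the coset, randomness picks the word inside it
        return ((message_bit * G_msg + random_bit * G_coarse) % 2).flatten()

    def decode(word):
        # the legitimate receiver identifies the coset; here the message
        # is the parity of the first two bits (word = [m+r, r, m+r, r])
        return (word[0] + word[1]) % 2

    for m in (0, 1):
        for r in (0, 1):
            assert decode(encode(m, r)) == m

    # Each message maps to two distinct words depending on the random bit,
    # which is what keeps the eavesdropper uncertain about m.
    print({(m, r): tuple(encode(m, r)) for m in (0, 1) for r in (0, 1)})
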
