51

Performance analysis and optimisation of in-network caching for information-centric future Internet

Wang, Haozhe January 2017 (has links)
The rapid development of wireless technologies and multimedia services has radically shifted the major function of the current Internet from host-centric communication to service-oriented content dissemination, resulting in a mismatch between the protocol design and current usage patterns. Motivated by this significant change, Information-Centric Networking (ICN), which has been attracting ever-increasing attention from the communication networks research community, has emerged as a new clean-slate networking paradigm for the future Internet. By identifying and routing data by unified names, ICN aims at providing natural support for efficient information retrieval over the Internet. As a crucial characteristic of ICN, in-network caching enables users to efficiently access popular contents from on-path routers equipped with ubiquitous caches, enhancing service quality and reducing network loads. Performance analysis and optimisation have been, and continue to be, key research interests in ICN. This thesis focuses on the development of efficient and accurate analytical models for the performance evaluation of ICN caching and on the design of optimal caching management schemes under practical network configurations. The research starts with the proposition of a new analytical model for caching performance under bursty multimedia traffic. The bursty characteristic is captured and closed-form expressions for the cache hit ratio are derived. To investigate the impact of topology and heterogeneous caching parameters on performance, a comprehensive analytical model is developed to gain valuable insight into caching performance with heterogeneous cache sizes, service intensity and content distribution under arbitrary topologies. The accuracy of the proposed models is validated by comparing the analytical results with those obtained from extensive simulation experiments. The analytical models are then used as cost-efficient tools to investigate the impact of key network and content parameters on the performance of caching in ICN. Bursty traffic and heterogeneous caching features have a significant influence on the performance of ICN. Therefore, in order to obtain optimal performance, a caching resource allocation scheme is proposed that leverages the proposed model and aims at minimising the total traffic within the network while improving the hit probability at the nodes. The performance results reveal that the caching allocation scheme achieves better caching performance and network resource utilisation than the default homogeneous and random caching allocation strategies. To attain a thorough understanding of the trade-off between the economic aspect and service quality, a cost-aware Quality-of-Service (QoS) optimisation caching mechanism is further designed, aiming for cost efficiency and QoS guarantees in ICN. A cost model is proposed that takes into account the installation and operation costs of ICN under a realistic ISP network scenario, and a QoS model is presented that formulates the service delay and delay jitter in the presence of heterogeneous service requirements and a general probabilistic caching strategy. Numerical results show the effectiveness of the proposed mechanism in achieving better service quality and lower network cost. In this thesis, the proposed analytical models are used to efficiently and accurately evaluate the performance of ICN and investigate the key performance metrics.
Leveraging the insights obtained from the analytical models, the proposed caching management schemes are able to optimise and enhance the performance of ICN. To broaden the outcomes achieved in the thesis, several interesting yet challenging future research directions are pointed out.
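The thesis validates its analytical models against simulation. As a hedged illustration of what such a baseline looks like (it is not the thesis's closed-form model), the sketch below estimates the hit ratio of a single LRU router cache under Zipf-distributed requests; the catalogue size, cache size and Zipf exponent are illustrative assumptions.

```python
# A minimal simulation sketch, not the thesis's analytical model: it estimates the hit
# ratio of a single LRU router cache under Zipf-distributed requests, the kind of
# baseline such models are usually validated against. The catalogue size, cache size
# and Zipf exponent below are illustrative assumptions.
import random
from collections import OrderedDict
from itertools import accumulate

def lru_hit_ratio(catalogue_size=1000, cache_size=50, alpha=0.8,
                  num_requests=100_000, seed=0):
    rng = random.Random(seed)
    # Zipf popularity: the rank-r content is requested with probability proportional to 1/r^alpha.
    weights = [1.0 / (rank ** alpha) for rank in range(1, catalogue_size + 1)]
    cum_weights = list(accumulate(weights))          # precomputed for fast sampling
    cache = OrderedDict()                            # most recently used keys at the end
    hits = 0
    for _ in range(num_requests):
        content = rng.choices(range(catalogue_size), cum_weights=cum_weights, k=1)[0]
        if content in cache:
            hits += 1
            cache.move_to_end(content)               # refresh its LRU position
        else:
            cache[content] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)            # evict the least recently used
    return hits / num_requests

print(f"Estimated LRU hit ratio: {lru_hit_ratio():.3f}")
```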
52

Distributed Caching in a Multi-Server Environment : A study of Distributed Caching mechanisms and an evaluation of Distributed Caching Platforms available for the .NET Framework

Herber, Robert January 2010 (has links)
This paper discusses the problems Distributed Caching can be used to solve and evaluates a couple of Distributed Caching Platforms targeting the .NET Framework. Basic concepts and functionality common to all distributed caching platforms are covered in chapter 2. We discuss how Distributed Caching can resolve synchronization problems when using multiple local caches, how a caching tier can relieve the database and improve the scalability of the system, and also how memory consumption can be reduced by storing data in a distributed fashion. A couple of .NET-based caching platforms are evaluated and tested: Microsoft AppFabric Caching, ScaleOut StateServer and Alachisoft NCache. For a quick overview, see the feature comparison table in chapter 3; for the main advantages and disadvantages of each platform, see section 6.1. The benchmark results show the difference in read performance between local caching, distributed caching, and distributed caching with a coherent local cache, for each evaluated caching platform. Local caching frameworks and database read times are included for comparison. These benchmark results are in chapter 5.
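As a language-agnostic sketch of the cache-aside idea the paper describes, a caching tier relieving the database, the snippet below shows repeated reads being absorbed before they reach the backing store. It is not the API of AppFabric, StateServer or NCache; the in-memory dictionary, TTL and placeholder query are illustrative assumptions standing in for a distributed cache cluster.

```python
# A hedged sketch of the cache-aside (read-through) pattern: repeated reads are served
# from the caching tier instead of the database. The dict stands in for a distributed
# cache cluster; the expiry and the fake database query are illustrative assumptions.
import time

class CacheAside:
    def __init__(self, ttl_seconds=60):
        self._store = {}           # key -> (value, expiry_timestamp)
        self._ttl = ttl_seconds
        self.db_reads = 0          # counts how often the database was actually hit

    def _query_database(self, key):
        self.db_reads += 1
        return f"row-for-{key}"    # placeholder for a real database query

    def get(self, key):
        entry = self._store.get(key)
        if entry is not None and entry[1] > time.time():
            return entry[0]                          # cache hit
        value = self._query_database(key)            # cache miss: go to the database
        self._store[key] = (value, time.time() + self._ttl)
        return value

cache = CacheAside()
for _ in range(1000):
    cache.get("customer:42")
print(f"Database reads for 1000 lookups: {cache.db_reads}")   # 1 with the cache in place
```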
53

Content Delivery in Fog-Aided Small-Cell Systems with Offline and Online Caching: An Information-Theoretic Analysis

Azimi, Seyyed, Simeone, Osvaldo, Tandon, Ravi 18 July 2017 (has links)
The storage of frequently requested multimedia content at small-cell base stations (BSs) can reduce the load of macro-BSs without relying on high-speed backhaul links. In this work, the optimal operation of a system consisting of a cache-aided small-cell BS and a macro-BS is investigated for both offline and online caching settings. In particular, a binary fading one-sided interference channel is considered in which the small-cell BS, whose transmission is interfered by the macro-BS, has a limited-capacity cache. The delivery time per bit (DTB) is adopted as a measure of the coding latency, that is, the duration of the transmission block, required for reliable delivery. For offline caching, assuming a static set of popular contents, the minimum achievable DTB is characterized through information-theoretic achievability and converse arguments as a function of the cache capacity and of the capacity of the backhaul link connecting cloud and small-cell BS. For online caching, under a time-varying set of popular contents, the long-term (average) DTB is evaluated for both proactive and reactive caching policies. Furthermore, a converse argument is developed to characterize the minimum achievable long-term DTB for online caching in terms of the minimum achievable DTB for offline caching. The performance of both online and offline caching is finally compared using numerical results.
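As a hedged, purely illustrative toy (not the paper's information-theoretic DTB analysis), the sketch below contrasts reactive caching, which stores a file only after it is requested and missed, with proactive caching, which refreshes the small-cell cache whenever the popular set changes; the churn rate, cache size and request model are assumptions.

```python
# A toy sketch, not the paper's model: reactive caching fills the cache only on misses,
# proactive caching prefetches the popular set as soon as it changes. All parameters
# (popular-set size, churn probability, cache size) are illustrative assumptions.
import random

def simulate(policy, rounds=10_000, popular_size=20, churn=0.05, cache_size=20, seed=0):
    """Returns the fraction of requests that miss the small-cell cache."""
    rng = random.Random(seed)
    popular = set(range(popular_size))
    next_file = popular_size
    cache = set(popular) if policy == "proactive" else set()
    misses = 0
    for _ in range(rounds):
        # The popular set drifts slowly: occasionally one file is replaced by a new one.
        if rng.random() < churn:
            popular.remove(rng.choice(sorted(popular)))
            popular.add(next_file)
            next_file += 1
            if policy == "proactive":
                cache = set(popular)          # prefetch the refreshed popular set
        request = rng.choice(sorted(popular))
        if request not in cache:
            misses += 1
            if policy == "reactive":          # cache only after a miss
                cache.add(request)
                if len(cache) > cache_size:
                    stale = sorted(cache - popular) or sorted(cache)
                    cache.remove(rng.choice(stale))
    return misses / rounds

for policy in ("reactive", "proactive"):
    print(f"{policy:9s} miss rate: {simulate(policy):.4f}")
```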
54

The effects of micro data centres for multi-service access nodes on latency and services

van Wyk, David January 2017 (has links)
Latency is becoming a significant factor in many Internet applications such as P2P sharing and online gaming. Coupled with the fact that an increasing number of people are using online services for backup and replication purposes, it is clear that congestion on the network increases exponentially. One of the ways in which the latency problem can be solved is to remove core network congestion, or to limit it in such a way that it does not pose a problem. In South Africa, Telkom rolled out MSAN cabinets as part of their Fibre-to-the-Curb (FTTC) upgrades. This created a unique opportunity to provide new services, like BaRaaS, by implementing micro data centres within the MSAN to reduce congestion on the core network. It is important to have background knowledge of what exactly latency is and what causes it on a network. It is also essential to understand how congestion (and thus latency) can be avoided on a network. The background literature covered helps to determine which tools are available to do this, as well as to highlight any possible gaps that exist for new congestion control mechanisms. A simulation study was performed to determine whether implementing micro data centres inside the MSAN will in fact reduce latency. Simulations must be done as realistically as possible to ensure that the results can be correlated to a real-world problem. Two different simulations were performed to model the behaviour of the network when backup and replication data is sent to the Internet and when it is sent to a local MSAN. In both models the core network throughput, as well as the Round Trip Times (RTTs) from the client to the Internet and to the MSAN cabinets, were recorded. The RTT results were then used to determine whether latency had been reduced. Once it was established that micro data centres will indeed help in reducing congestion and latency on the network, a storage server was designed for inclusion inside the MSAN cabinet. A cost benefit analysis was also performed to ensure that the project will be financially viable in the long term. The cost analysis took into account all the costs associated with the project and then spread them over a certain period of time to determine the initial expenses. Additional information was then taken into consideration to determine the possible income per year as well as extra expenditure. It was found that the inclusion of a micro data centre removes large backup data traffic from the core network, which reduces congestion and improves latency. From the Cost Benefit Analysis (CBA) it was found that the BaRaaS service is viable from a subscription point of view. Finally, the relevant conclusions with regard to the effects of data centres in MSAN cabinets on latency and services were drawn. / Dissertation (MEng)--University of Pretoria, 2017. / Electrical, Electronic and Computer Engineering / MEng / Unrestricted
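As a hedged back-of-the-envelope companion to the simulation study (not the dissertation's models), the M/M/1 mean-delay formula T = 1/(μ − λ) below shows why keeping backup traffic at a local micro data centre lowers queueing delay on the core link; the link capacity and traffic volumes are illustrative assumptions.

```python
# A back-of-the-envelope sketch using the M/M/1 mean sojourn time T = 1/(mu - lambda):
# offloading backup traffic from the core link to a local micro data centre lowers the
# core link's utilisation and therefore its queueing delay. All figures are assumptions.

def mm1_delay_ms(service_rate_pkts_per_s, arrival_rate_pkts_per_s):
    if arrival_rate_pkts_per_s >= service_rate_pkts_per_s:
        raise ValueError("link is overloaded; the M/M/1 queue is unstable")
    return 1000.0 / (service_rate_pkts_per_s - arrival_rate_pkts_per_s)

CORE_CAPACITY = 125_000            # packets/s the core link can serve (assumed)
interactive_traffic = 75_000       # packets/s of latency-sensitive traffic (assumed)
backup_traffic = 45_000            # packets/s of backup/replication traffic (assumed)

with_backup = mm1_delay_ms(CORE_CAPACITY, interactive_traffic + backup_traffic)
without_backup = mm1_delay_ms(CORE_CAPACITY, interactive_traffic)

print(f"Core queueing delay with backup traffic:    {with_backup:.2f} ms")
print(f"Core queueing delay with backup kept local: {without_backup:.2f} ms")
```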
55

Improving Storage with Stackable Extensions

Guerra, Jorge 13 July 2012 (has links)
Storage is a central part of computing. Driven by an exponentially increasing content generation rate and a widening performance gap between memory and secondary storage, researchers are in a perennial quest to push for further innovation. This has resulted in novel ways to “squeeze” more capacity and performance out of current and emerging storage technology. Adding intelligence and leveraging new types of storage devices has opened the door to a whole new class of optimizations to save cost, improve performance, and reduce energy consumption. In this dissertation, we first develop, analyze, and evaluate three storage extensions. Our first extension tracks application access patterns and writes data in the way individual applications most commonly access it, to benefit from the sequential throughput of disks. Our second extension uses a lower-power flash device as a cache to save energy and turn off the disk during idle periods. Our third extension is designed to leverage the characteristics of both disks and solid state devices by placing data in the most appropriate device to improve performance and save power. In developing these systems, we learned that extending the storage stack is a complex process. Implementing new ideas incurs a prolonged and cumbersome development process and requires developers to have advanced knowledge of the entire system to ensure that extensions accomplish their goal without compromising data recoverability. Furthermore, storage administrators are often reluctant to deploy specific storage extensions without understanding how they interact with other extensions and whether the extension ultimately achieves the intended goal. We address these challenges by using a combination of approaches. First, we simplify the storage extension development process with system-level infrastructure that implements core functionality commonly needed for storage extension development. Second, we develop a formal theory to assist administrators in deploying storage extensions while guaranteeing that the given high-level goals are satisfied. There are, however, some cases for which our theory is inconclusive. For such scenarios we present an experimental methodology that allows administrators to pick the extension that performs best for a given workload. Our evaluation demonstrates the benefits of both the infrastructure and the formal theory.
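As a hedged sketch of the stacking idea only (it is not the dissertation's system-level infrastructure), each extension below wraps the layer beneath it behind the same read/write interface, so features such as write-pattern tracking and read caching compose without modifying the base store; the class and method names are hypothetical.

```python
# A minimal sketch of stacking storage extensions behind one read/write interface
# (illustrative only; not the dissertation's infrastructure). Each extension wraps
# the layer below it, so features compose without changing the underlying store.

class DictBackedDisk:
    """Stands in for a slow backing store."""
    def __init__(self):
        self._blocks = {}
    def read(self, block):
        return self._blocks.get(block, b"\x00" * 512)
    def write(self, block, data):
        self._blocks[block] = data

class WriteLoggingExtension:
    """Example extension: records the application's write pattern."""
    def __init__(self, lower):
        self._lower = lower
        self.write_log = []
    def read(self, block):
        return self._lower.read(block)
    def write(self, block, data):
        self.write_log.append(block)
        self._lower.write(block, data)

class ReadCacheExtension:
    """Example extension: caches recently read blocks in memory."""
    def __init__(self, lower, capacity=128):
        self._lower, self._capacity, self._cache = lower, capacity, {}
    def read(self, block):
        if block not in self._cache:
            if len(self._cache) >= self._capacity:
                self._cache.pop(next(iter(self._cache)))   # evict the oldest insertion
            self._cache[block] = self._lower.read(block)
        return self._cache[block]
    def write(self, block, data):
        self._cache[block] = data
        self._lower.write(block, data)

# Extensions stack in any order on top of the base device.
stack = ReadCacheExtension(WriteLoggingExtension(DictBackedDisk()))
stack.write(7, b"hello")
assert stack.read(7) == b"hello"
```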
56

Edge Caching for Small Cell Networks

Pervej, Md Ferdous 01 August 2019 (has links)
The idea of storing contents such as media files, music files and movie clips in advance is simple, yet challenging in terms of the effort required to make it count. Some of the benefits of pre-storing contents are a reduced delay in accessing/downloading a content item, a reduced load on the centralized servers and, of course, a higher data rate. However, several challenges need to be addressed to achieve these benefits. Among many, some of the fundamental ones are the limited storage capacity, storing the right content and minimizing the costs. This thesis aims to address these challenges. First, a framework for predicting the proper contents to store in the limited storage capacity is presented. Then, the cost is minimized considering several real-world scenarios. While doing that, all possible collaborations among the local nodes are exploited to ensure high performance. Therefore, the goal of this thesis is to come up with a solution to the content storing problem so that the network cost is minimized.
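As a hedged sketch of the two steps described above, predicting which contents to store and filling a limited cache with them, the snippet below uses an exponential moving average over past request counts and keeps the highest-scoring contents; the predictor, content names and parameters are illustrative assumptions rather than the thesis's framework.

```python
# A hedged sketch of popularity-driven edge caching (illustrative, not the thesis's
# framework): an exponential moving average predicts per-content demand from past
# request windows, and the limited cache keeps the contents with the highest score.
from collections import defaultdict

def predict_popularity(request_windows, smoothing=0.3):
    """request_windows: list of dicts mapping content ID -> request count per window."""
    score = defaultdict(float)
    for window in request_windows:
        for content, count in window.items():
            score[content] = smoothing * count + (1 - smoothing) * score[content]
    return score

def fill_cache(predicted, cache_slots):
    ranked = sorted(predicted, key=predicted.get, reverse=True)
    return set(ranked[:cache_slots])

history = [                                    # hypothetical request counts per window
    {"clip_a": 40, "song_b": 10, "movie_c": 5},
    {"clip_a": 25, "song_b": 30, "movie_c": 4},
    {"clip_a": 10, "song_b": 45, "movie_c": 6},
]
cache = fill_cache(predict_popularity(history), cache_slots=2)
print(cache)   # the two contents with the highest predicted demand
```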
57

Limites Fondamentales De Stockage Dans Les Réseaux Sans Fil / Fundamental Limits of Coded Caching in Wireless Networks

Ghorbel, Asma 13 April 2018 (has links)
Caching, i.e. storing popular contents in caches available at end users, has received significant interest as a technique to reduce peak traffic in wireless networks. In particular, coded caching as proposed by Maddah-Ali and Niesen has been considered a promising approach to achieve a constant delivery time as the system dimension grows. However, several limitations prevent its application in practical wireless systems. Throughout the thesis, we address the limitations of classical coded caching in various wireless channels. We then propose novel delivery schemes that opportunistically exploit the underlying wireless channels while partly preserving the promising gain of coded caching. In the first part of the thesis, we study the achievable rate region of the erasure broadcast channel with cache and state feedback. We propose an achievable scheme and prove its optimality for special cases of interest. These results are generalized to the multi-antenna broadcast channel with state feedback. In the second part, we study content delivery over asymmetric block-fading broadcast channels, where the channel quality varies across users and time. Assuming that user requests arrive dynamically, we design an online scheme based on a queuing structure and prove that it maximizes the alpha-fair utility among all schemes restricted to decentralized placement. In the last part, we study opportunistic scheduling over the asymmetric fading broadcast channel and aim to design a scalable delivery scheme while ensuring fairness among users. We propose a simple threshold-based scheduling policy of linear complexity that requires only one bit of feedback from each user.
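For context, and hedged as background rather than a result of this thesis, the centralised coded caching scheme of Maddah-Ali and Niesen serves K users, each holding M of the N library files in its cache, with a worst-case delivery load (in file units) at the memory points M = tN/K, t in {0, 1, ..., K}, of

```latex
% Centralised coded caching load (Maddah-Ali and Niesen), at cache sizes M = tN/K,
% t \in \{0, 1, \dots, K\}, with memory sharing in between.
R(M) \;=\; K\left(1 - \frac{M}{N}\right) \cdot \frac{1}{1 + \frac{KM}{N}}
```

The factor (1 - M/N) is the local caching gain and 1/(1 + KM/N) is the global coded multicasting gain; the thesis studies how much of the latter survives over erasure and fading broadcast channels.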
58

Radiance Caching with Environment Maps

Buerli, Michael 01 June 2013 (has links) (PDF)
The growing demand for realistic renderings in both film and games has led to a number of proposed solutions to the Global Illumination problem. In order to imitate natural lighting, it is necessary to gather indirect illumination from the surrounding environment for lighting computations. This is a computationally expensive problem, requiring the sampling or rasterization of the hemisphere surrounding each ray intersection, and it has no standardized solution. In this thesis we propose a new method of approximation that uses environment maps for caching radiance. The proposed method leverages a voxelized scene representation for storing direct illumination and a cache of environment maps for integrating indirect illumination. By using a voxelized scene to gather indirect lighting contributions and caching these contributions spatially, we are able to achieve fast and convincing renders of large, complex scenes. Our implementation produces images comparable to those of existing Monte Carlo integration methods, with render speeds an order of magnitude or more faster.
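As a hedged sketch of the spatial-caching idea only (the voxelised direct lighting and environment-map integration of the thesis are omitted), the snippet below quantises shading points to a voxel grid and computes the expensive indirect estimate once per voxel, reusing it for nearby points; all names and the placeholder estimate are illustrative.

```python
# A hedged sketch of spatially cached indirect lighting (illustrative only; it omits the
# thesis's voxelised direct lighting and environment-map integration). Shading points are
# quantised to a voxel grid and the expensive indirect estimate is computed once per voxel.
import math

def quantise(point, voxel_size=0.5):
    return tuple(math.floor(c / voxel_size) for c in point)

def expensive_indirect_estimate(voxel_key):
    # Placeholder for hemisphere sampling / environment-map lookup at this voxel.
    return 0.1 * sum(voxel_key) % 1.0

class RadianceCache:
    def __init__(self):
        self._cache = {}
        self.evaluations = 0
    def indirect(self, point):
        key = quantise(point)
        if key not in self._cache:
            self.evaluations += 1
            self._cache[key] = expensive_indirect_estimate(key)
        return self._cache[key]

cache = RadianceCache()
for x in range(100):
    cache.indirect((x * 0.01, 1.0, 2.0))              # 100 nearby shading points
print(f"Expensive evaluations: {cache.evaluations}")  # only the distinct voxels touched
```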
59

A Longitudinal Evaluation of HTTP Traffic

Callahan, Thomas Richard 22 May 2012 (has links)
No description available.
60

A Framework for Performance Optimization of Tensor Contraction Expressions

Lai, Pai-Wei January 2014 (has links)
No description available.
