601

Efficient Scaling of a Web Proxy Cluster

Zhang, Hao 27 October 2017 (has links) (PDF)
With the continuing growth in network traffic and the increasing diversity of web content, web caching, together with various network functions (NFs), has been introduced to enhance security, optimize network performance, and reduce costs. In a large enterprise network with more than tens of thousands of users, a single proxy server cannot handle the volume of requests, so proxies are deployed as a group. When multiple web cache proxies work as a cluster, they communicate with each other and share cached objects using the Internet Cache Protocol (ICP), which leads to poor scalability. This thesis describes the development of a framework that provides efficient management of a distributed web cache. A controller is introduced into the cluster of proxy servers and becomes responsible for managing the objects shared within the cluster. By obtaining knowledge of global state from the controller, proxy servers working in the group do not need to query their neighbors' storage. This reduces traffic in the cluster and saves the computing resources of the associated proxy servers. An evaluation on a caching proxy benchmark shows that our approach achieves superior scalability compared to an ICP-based web caching cluster.
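As a rough sketch of the approach described above (with hypothetical class and method names, not the thesis code), a central controller can answer a single location query per miss, replacing the ICP broadcast to every sibling proxy:

    class ClusterController:
        """Tracks which proxy in the cluster holds which cached object."""
        def __init__(self):
            self.location = {}                      # object URL -> set of proxy IDs

        def register(self, url, proxy_id):
            self.location.setdefault(url, set()).add(proxy_id)

        def lookup(self, url):
            holders = self.location.get(url)
            return next(iter(holders)) if holders else None

    class ProxyServer:
        def __init__(self, proxy_id, controller):
            self.id, self.controller, self.cache = proxy_id, controller, {}

        def handle_request(self, url):
            if url in self.cache:                   # local hit
                return self.cache[url]
            peer = self.controller.lookup(url)      # one lookup instead of an ICP broadcast
            body = self.fetch_from_peer(peer, url) if peer else self.fetch_from_origin(url)
            self.cache[url] = body
            self.controller.register(url, self.id)  # publish the new copy to the controller
            return body

        def fetch_from_peer(self, peer, url):
            ...                                     # transfer from the sibling proxy

        def fetch_from_origin(self, url):
            ...                                     # cluster-wide miss: go to the origin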
602

Supporting Software Transactional Memory in Distributed Systems: Protocols for Cache-Coherence, Conflict Resolution and Replication

Zhang, Bo 05 December 2011 (has links)
Lock-based synchronization on multiprocessors is inherently non-scalable, non-composable, and error-prone. These problems are exacerbated in distributed systems due to an additional layer of complexity: multinode concurrency. Transactional memory (TM) is an emerging, alternative synchronization abstraction that promises to alleviate these difficulties. In the TM model, code that accesses shared memory objects is organized as transactions, which execute speculatively while logging changes. If a transactional conflict is detected, one of the conflicting transactions is aborted and re-executed, while the other is allowed to commit, yielding the illusion of atomicity. TM for multiprocessors has been proposed in software (STM), in hardware (HTM), and in combination (HyTM). This dissertation focuses on supporting the TM abstraction in distributed systems, i.e., distributed STM (D-STM). We focus on three problem spaces: cache-coherence (CC), conflict resolution, and replication. We evaluate the performance of D-STM by measuring the competitive ratio of its makespan, i.e., the ratio of its makespan (the last completion time for a given set of transactions) to the makespan of an optimal off-line clairvoyant scheduler. We show that the competitive ratio of D-STM for metric-space networks is O(N^2) for N transactions requesting an object under the Greedy contention manager and an arbitrary CC protocol. To improve performance, we propose a class of location-aware CC protocols, called LAC protocols. We show that the combination of the Greedy manager and a LAC protocol yields an O(N log N s) competitive ratio for s shared objects. We then formalize two classes of CC protocols: distributed queuing cache-coherence (DQCC) protocols and distributed priority queuing cache-coherence (DPQCC) protocols, both of which can be implemented using distributed queuing protocols. We show that a DQCC protocol is O(N log D)-competitive and a DPQCC protocol is O(log D_delta)-competitive for N dynamically generated transactions requesting an object, where D_delta is the normalized diameter of the underlying distributed queuing protocol. Additionally, we propose a novel CC protocol, called Relay, which reduces the total number of aborts to O(N) for N conflicting transactions requesting an object, a significant improvement over past CC protocols, which incur O(N^2) total aborts. We also analyze Relay's dynamic competitive ratio in terms of communication cost (for dynamically generated transactions), and show that it is O(log D_0), where D_0 is the normalized diameter of the underlying network spanning tree. To reduce unnecessary aborts and increase concurrency for D-STM based on globally consistent contention management policies, we propose the distributed dependency-aware (DDA) conflict resolution model, which adopts different conflict resolution strategies based on transaction types. In the DDA model, read-only transactions never abort, because a set of versions is kept for each object; each transaction keeps only the precedence relations derived from its local knowledge. We show that the DDA model 1) never aborts read-only transactions, 2) guarantees that every transaction eventually commits, 3) supports invisible reads, and 4) efficiently garbage collects useless object versions. To establish competitive ratio bounds for contention managers in D-STM, we model the distributed transactional contention management problem as the traveling salesman problem (TSP).
We prove that for D-STM, any online, work-conserving, deterministic contention manager provides an Omega(max{s, s^2/D}) competitive ratio in a network with normalized diameter D and s shared objects. Compared with the Omega(s) competitive ratio for multiprocessor STM, the performance guarantee for D-STM degrades by a factor proportional to s/D. We present a randomized algorithm, called Randomized, with a competitive ratio of O(sC log n log^2 n) for s objects shared by n transactions with a maximum conflicting degree C. To break this lower bound, we present a randomized algorithm, Cutting, which needs partial information about transactions and an approximate TSP algorithm A with approximation ratio phi_A. We show that the average-case competitive ratio of Cutting is O(s phi_A log^2 m log^2 n), which is close to O(s). Single-copy (SC) D-STM keeps only one writable copy of each object and thus cannot tolerate node failures. We propose a quorum-based replication (QR) D-STM model, which provides provable fault tolerance without incurring high communication overhead compared with the SC model. The QR model stores object replicas in a tree quorum system, where two quorums intersect if one of them is a write quorum, and ensures consistency among replicas at commit time. The communication cost of an operation in the QR model is proportional to the communication cost from the requesting node to its closest read or write quorum. In the presence of node failures, the QR model exhibits high availability and degrades gracefully as the number of failed nodes increases, at a reasonably higher communication cost. We develop a prototype implementation of the dissertation's proposed solutions, including the DQCC and DPQCC protocols, the Relay protocol, and the DDA model, in the HyFlow Java D-STM framework. We experimentally evaluated these solutions against respective competitor solutions on a set of microbenchmarks (e.g., data structures including a distributed linked list, binary search tree, and red-black tree) and macrobenchmarks (e.g., distributed versions of the applications in the STAMP STM benchmark suite for multiprocessors). Our experimental studies revealed that: 1) based on the same distributed queuing protocol (i.e., the Ballistic CC protocol), DPQCC yields better transactional throughput than DQCC, by a factor of 50%-100%, on a range of transactional workloads; 2) Relay outperforms competitor protocols (including Arrow, Ballistic, and Home) by more than 200% as network size and contention increase, since it efficiently reduces the average number of aborts per transaction (to less than 0.5); and 3) the DDA model outperforms existing contention management policies (including the Greedy, Karma, and Kindergarten managers) by up to 30%-40% in high-contention environments; for read/write-balanced workloads, the DDA model outperforms these policies by 30%-60% on average, and for read-dominated workloads it outperforms them by over 200%. / Ph. D.
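For readers unfamiliar with the Greedy contention manager referenced throughout the analysis, a minimal sketch of its basic policy is shown below; the class and timestamp handling are illustrative only, and the dissertation's protocols layer cache-coherence and location awareness on top of this rule:

    import itertools

    _timestamps = itertools.count()

    class Transaction:
        def __init__(self):
            self.timestamp = next(_timestamps)      # smaller value = older = higher priority
            self.aborted = False

    def resolve_conflict(t1, t2):
        """Greedy policy: the older transaction wins, the younger one aborts."""
        winner, loser = (t1, t2) if t1.timestamp < t2.timestamp else (t2, t1)
        loser.aborted = True                        # loser is rolled back and retried later
        return winner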
603

Algorithms and Frameworks for Accelerating Security Applications on HPC Platforms

Yu, Xiaodong 09 September 2019 (has links)
Typical cybersecurity solutions emphasize achieving defense functionality. However, execution efficiency and scalability are equally important, especially for real-world deployment. Straightforward mappings of cybersecurity applications onto HPC platforms may significantly underutilize the HPC devices' capacities. On the other hand, sophisticated implementations are quite difficult: they require both an in-depth understanding of cybersecurity domain-specific characteristics and of the HPC architecture and system model. In our work, we investigate three sub-areas of cybersecurity: mobile software security, network security, and system security. They have the following performance issues, respectively: 1) Flow- and context-sensitive static analysis of large and complex Android APKs is incredibly time-consuming; existing CPU-only frameworks/tools have to set a timeout threshold that ceases the analysis, trading precision for performance. 2) Network intrusion detection systems (NIDS) use automata processing as their search core and require line-speed processing; however, achieving high-speed automata processing is exceptionally difficult in both algorithmic and implementation terms. 3) It is unclear how cache configurations impact the performance of time-driven cache side-channel attacks; this question remains open because it is difficult to conduct comparative measurements to study the impacts. In this dissertation, we demonstrate how application-specific characteristics can be leveraged to optimize implementations on various types of HPC platforms for faster and more scalable cybersecurity executions. For example, we present a new GPU-assisted framework and a collection of optimization strategies for fast Android static data-flow analysis that achieve up to 128X speedups over the plain GPU implementation. For network intrusion detection systems (IDS), we design and implement an algorithm capable of eliminating the state explosion in out-of-order packet situations, which reduces the memory overhead by up to 400X. We also present tools for improving the usability of Micron's Automata Processor. To study the impact of cache configurations on the performance of time-driven cache side-channel attacks, we design an approach for conducting comparative measurements. We propose a quantifiable success-rate metric to measure the performance of time-driven cache attacks and utilize the GEM5 platform to emulate the configurable cache. / Doctor of Philosophy / Typical cybersecurity solutions emphasize achieving defense functionality. However, execution efficiency and scalability are equally important, especially for real-world deployment. Straightforward mappings of applications onto High-Performance Computing (HPC) platforms may significantly underutilize the HPC devices' capacities. In this dissertation, we demonstrate how application-specific characteristics can be leveraged to optimize various types of HPC executions for cybersecurity. We investigate several sub-areas, including mobile software security, network security, and system security. For example, we present a new GPU-assisted framework and a collection of optimization strategies for fast Android static data-flow analysis that achieve up to 128X speedups over the unoptimized GPU implementation. For network intrusion detection systems (IDS), we design and implement an algorithm capable of eliminating the state explosion in out-of-order packet situations, which reduces the memory overhead by up to 400X. We also present tools for improving the usability of HPC programming. To study the impact of cache configurations on the performance of time-driven cache side-channel attacks, we design an approach for conducting comparative measurements. We propose a quantifiable success-rate metric to measure the performance of time-driven cache attacks and utilize the GEM5 platform to emulate the configurable cache.
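A hedged sketch of what a quantifiable success-rate metric for a time-driven cache attack might look like, assuming the emulated runs yield (recovered key, true key) pairs; the dissertation's actual metric and GEM5 setup are more involved:

    def trial_success_rate(recovered_key, true_key):
        """Fraction of key bytes recovered correctly in one attack trial."""
        hits = sum(1 for r, t in zip(recovered_key, true_key) if r == t)
        return hits / len(true_key)

    def success_rate(trials):
        """Average over (recovered_key, true_key) pairs from repeated emulated runs."""
        rates = [trial_success_rate(r, t) for r, t in trials]
        return sum(rates) / len(rates)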
604

Analysis and Enforcement of Properties in Software Systems

Wu, Meng 02 July 2019 (has links)
Due to the lack of effective techniques for detecting and mitigating property violations, existing approaches to ensuring the safety and security of software systems are often labor-intensive and error-prone. Furthermore, they focus primarily on the functional correctness of the software code while ignoring micro-architectural details of the underlying processor, such as the cache and speculative execution, which may undermine their soundness guarantees. To fill the gap, I propose a set of new methods and tools for ensuring the safety and security of software systems. Broadly speaking, these methods and tools fall into three categories. The first category is concerned with static program analysis. Specifically, I develop a novel abstract interpretation framework that considers both speculative execution and a cache model, and is guaranteed to be sound for estimating the execution time of a program and detecting side-channel information leaks. The second category is concerned with static program transformation. The goal is to eliminate side channels by equalizing the number of CPU cycles and the number of cache misses along all program paths for all sensitive variables. The third category is concerned with runtime safety enforcement. Given a property that may be violated by a reactive system, the goal is to synthesize an enforcer, called the shield, that corrects the erroneous behaviors of the system instantaneously, so that the property is always satisfied by the combined system. I develop techniques to make the shield practical by handling both burst errors and real-valued signals. The proposed techniques have been implemented and evaluated on realistic applications to demonstrate their effectiveness and efficiency. / Doctor of Philosophy / Just as everything around us must follow certain rules to work correctly, our software systems must follow security and safety properties. In particular, software may leak information in unexpected ways, e.g., through program timing, which makes such leaks harder to detect or mitigate. For instance, if the execution time of a program is related to a sensitive value, an attacker may learn information about that value. On the other hand, due to the complexity of software, it is nearly impossible to fully test or verify it, yet the correctness of software systems at runtime is crucial for critical applications. While existing approaches to finding or resolving property violations are often labor-intensive and error-prone, in this dissertation I first propose an automated tool for detecting and mitigating security vulnerabilities that leak through program timing. Programs processed by the tool are guaranteed to run in constant time regardless of the sensitive values. I have also taken into consideration, for the first time, the influence of speculative execution, which is the cause behind the recent Spectre and Meltdown attacks. To enforce the correctness of programs at runtime, I introduce an extra component that can be attached to the original system to correct any violation the moment it happens, so that the combined system remains correct. All proposed methods have been evaluated on a variety of real-world applications. The results show that these methods are effective and efficient in practice.
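A minimal sketch of the shield idea, assuming a hypothetical reactive-system interface: each output is checked against the property and corrected on the spot if it would violate it. The property and correction rule below are made-up examples, not the dissertation's synthesis procedure.

    class Shield:
        def __init__(self, system, prop, correct):
            self.system, self.prop, self.correct = system, prop, correct
            self.state = None                       # shield-internal monitoring state

        def step(self, inputs):
            out = self.system.step(inputs)          # possibly erroneous system output
            if self.prop(self.state, inputs, out):
                self.state = (inputs, out)
                return out                          # property holds: pass through
            fixed = self.correct(self.state, inputs, out)
            self.state = (inputs, fixed)
            return fixed                            # violation corrected instantaneously

    # Made-up example property: the alarm must be raised whenever the sensor
    # reading exceeds a threshold; the correction simply forces the alarm on.
    prop    = lambda st, i, o: (i["sensor"] <= 100) or o["alarm"]
    correct = lambda st, i, o: {**o, "alarm": True}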
605

An Integrated End-User Data Service for HPC Centers

Monti, Henry Matthew 16 January 2013 (has links)
The advent of extreme-scale computing systems, e.g., petaflop supercomputers, High Performance Computing (HPC) cyber-infrastructure, enterprise databases, and experimental facilities such as large-scale particle colliders, is pushing the envelope on dataset sizes. Supercomputing centers routinely generate and consume ever-increasing amounts of data while executing high-throughput computing jobs. These are often result datasets or checkpoint snapshots from long-running simulations, but can also be input data from experimental facilities such as the Large Hadron Collider (LHC) or the Spallation Neutron Source (SNS). These growing datasets are often processed by a geographically dispersed user base across multiple different HPC installations. Moreover, end-user workflows are also increasingly distributed in nature, with massive input, output, and even intermediate data often being transported to and from several HPC resources or end-users for further processing or visualization. The growing data demands of applications, coupled with the distributed nature of HPC workflows, have the potential to place significant strain on both the storage and network resources at HPC centers. Despite this potential impact, rather than stringently managing HPC center resources, a common practice is to leave application-associated data management to the end-user, as the user is intimately aware of the application's workflow and data needs. This means end-users must frequently interact with the local storage in HPC centers, the scratch space, which is used for job input, output, and intermediate data. Scratch is built using a parallel file system that supports very high aggregate I/O throughput, e.g., Lustre, PVFS, or GPFS. To ensure efficient I/O and faster job turnaround, use of scratch by applications is encouraged. Consequently, job input and output data must be moved into and out of the scratch space by end-users before and after the job runs, respectively. In practice, end-users arbitrarily stage and offload data as and when they deem fit, without any consideration of the center's performance, often leaving data on the scratch long after it is needed. HPC centers resort to "purge" mechanisms that sweep the scratch space to remove files found to be no longer in use, based on their not having been accessed within a preselected time threshold, called the purge window, which commonly ranges from a few days to a week. This ad-hoc data management ignores the interactions between different users' data storage and transmission demands, and their impact on center serviceability, leading to suboptimal use of precious center resources. To address the issues of exponentially increasing data sizes and ad-hoc data management, we present a fresh perspective on scratch storage management by fundamentally rethinking the manner in which scratch space is employed. Our approach is twofold. First, we re-design the scratch system as a "cache" and build "retention", "population", and "eviction" policies that are tightly integrated from the start, rather than being add-on tools. Second, we aim to provide and integrate the necessary end-user data delivery services, i.e., timely offloading (eviction) and just-in-time staging (population), so that the center's scratch space usage can be optimized through coordinated data movement. Together, these two approaches create our Integrated End-User Data Service, wherein data transfer and placement on the scratch space are scheduled with job execution.
This strategy allows us to couple job scheduling with cache management, thereby bridging the gap between system software tools and scratch storage management. It enables the retention of only the relevant data, for only as long as it is needed. Redesigning the scratch as a cache captures the current HPC usage pattern more accurately, and better equips the scratch storage system to serve the growing datasets of workloads. This is a fundamental paradigm shift in the way scratch space has been managed in HPC centers, and goes well beyond providing simple purge tools to serve a caching workload. / Ph. D.
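A simplified sketch, with invented interfaces, of how scratch-as-cache population and eviction could be keyed to the job schedule rather than to a purge window:

    class ScratchCache:
        def __init__(self, capacity_bytes):
            self.capacity = capacity_bytes
            self.resident = {}                      # path -> (size, owning job id)

        def stage_for(self, job):
            """Population: pull a job's inputs just in time, before it starts."""
            for path, size in job.inputs:
                self.ensure_space(size)
                self.resident[path] = (size, job.id)   # copied in from archive/remote

        def offload_after(self, job, archive):
            """Eviction: offload outputs and drop files owned by a finished job."""
            for path, (size, owner) in list(self.resident.items()):
                if owner == job.id:
                    archive.put(path)               # timely offload of results
                    del self.resident[path]         # reclaim scratch space immediately

        def ensure_space(self, size):
            used = sum(s for s, _ in self.resident.values())
            if used + size > self.capacity:
                raise RuntimeError("scratch full: evict idle data or delay staging")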
606

Dynamics of La Crosse virus: Surveillance, Control and Effect on Vector Behavior

Yang, Fan 31 January 2017 (has links)
La Crosse virus (LACV) encephalitis is the most common and important endemic mosquito-borne disease of children in the U.S., with an estimated 300,000 annual infections. The disease is maintained in a zoonotic cycle involving the eastern treehole mosquito, Aedes triseriatus, and small woodland mammals such as chipmunks and squirrels. The objectives of this study were 1) to conduct surveillance of LACV and other mosquito-borne viruses; 2) to evaluate the effect of virus infection on mosquito host-seeking behavior and neurotransmitter levels; and 3) to determine the effectiveness of barrier sprays for controlling infected mosquito vectors. Our surveillance study demonstrated the involvement of an invasive species, Aedes japonicus, in the transmission cycle of Cache Valley virus (CVV), a mosquito-borne virus that is closely related to LACV. Thus, surveillance is a critical step in public health, providing pathogen distribution and frequency data as well as identifying and incriminating new vectors. LACV infection did not affect the host-seeking behavior of Ae. triseriatus females. Using high-performance liquid chromatography with electrochemical detection (HPLC-ED), the levels of serotonin and dopamine were measured in infected and uninfected mosquitoes. Serotonin is known to affect blood-feeding, and dopamine affects host-seeking. Serotonin levels were significantly lower in LACV-infected mosquitoes, but dopamine levels were unaffected by the virus. A previous study found that LACV infection caused an alteration in mosquito blood-feeding in a way that could enhance virus transmission. This work showed that LACV infection can reduce the level of serotonin in the mosquito, promoting virus transmission through altered blood-feeding without impairing the vector's ability to locate a host. Standard CDC bottle assays were used to evaluate the efficacy of two pyrethroids and two essential-oil sprays on LACV-infected and uninfected mosquitoes. LACV-infected Ae. triseriatus females were more susceptible to both pyrethroids than uninfected ones. Infection status did not affect the susceptibility of Ae. albopictus to either pyrethroid. The essential oils were inconsistent in their effects. These results demonstrate that barrier sprays may be a viable part of a mosquito control program, not just to reduce the biting rate but potentially to reduce the virus-infected portion of the vector population. / Ph. D.
607

HyFlow: A High Performance Distributed Software Transactional Memory Framework

Saad Ibrahim, Mohamed Mohamed 14 June 2011 (has links)
We present HyFlow, a distributed software transactional memory (D-STM) framework for distributed concurrency control. Lock-based concurrency control suffers from drawbacks including deadlocks, livelocks, and scalability and composability challenges. These problems are exacerbated in distributed systems because their distributed versions are more complex to cope with (e.g., distributed deadlocks). STM and D-STM are promising alternatives to lock-based and distributed lock-based concurrency control for centralized and distributed systems, respectively, that overcome these difficulties. HyFlow is a Java framework for D-STM, with pluggable support for directory lookup protocols, transactional synchronization and recovery mechanisms, contention management policies, cache coherence protocols, and network communication protocols. HyFlow exports a simple distributed programming model that excludes locks: using (Java 5) annotations, atomic sections are defined as transactions, in which reads and writes to shared, local, and remote objects appear to take effect instantaneously. No changes are needed to the underlying virtual machine or compiler. We describe HyFlow's architecture and implementation, and report on experimental studies comparing HyFlow against competing models including Java remote method invocation (RMI) with mutual exclusion and read/write locks, distributed shared memory (DSM), and directory-based D-STM. / Master of Science
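HyFlow itself marks atomic sections with Java 5 annotations; purely as an analogy of that programming model (not HyFlow's API), a retry-until-commit atomic section could be sketched in Python as follows, with the STM runtime and its conflict signal assumed rather than real:

    class ConflictError(Exception):
        """Assumed to be raised by the hypothetical STM runtime on conflicts."""

    def atomic(stm):
        def decorator(fn):
            def wrapper(*args, **kwargs):
                while True:                         # retry until the transaction commits
                    tx = stm.begin()                # start a speculative transaction
                    try:
                        result = fn(tx, *args, **kwargs)
                        if stm.commit(tx):          # validation succeeded
                            return result
                    except ConflictError:
                        pass                        # fall through, abort, and retry
                    stm.abort(tx)                   # roll back the transaction's log
            return wrapper
        return decorator

    # Usage with a hypothetical STM runtime:
    # @atomic(stm)
    # def transfer(tx, src, dst, amount):
    #     tx.write(src, tx.read(src) - amount)
    #     tx.write(dst, tx.read(dst) + amount)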
608

Confused by Path: Analysis of Path Confusion Based Attacks

Mirheidari, Seyed Ali 12 November 2020 (has links)
URL parsing and normalization are common and important operations in different web frameworks and technologies. In recent years, security researchers have targeted these processes and discovered high-impact vulnerabilities and exploitation techniques. Taking a different approach, we focus on the semantic disconnect among different framework-independent web technologies (e.g., browsers, proxies, cache servers, web servers), which results in different URL interpretations. We coined the term “Path Confusion” to represent this disagreement, and this thesis focuses on analyzing the enabling factors and security impact of this problem. In this thesis, we show the impact and importance of path confusion in two attack classes: Style Injection by Relative Path Overwrite (RPO) and Web Cache Deception (WCD). We focus on these attacks as case studies to demonstrate how utilizing path confusion techniques makes targeted sites exploitable. Moreover, we propose novel variations of each attack which expand the number of vulnerable sites and introduce new attack scenarios. We present instances which have been secured against these attacks, yet are still exploitable with the introduced path confusion techniques. To further elucidate the seriousness of path confusion, we also present the results of a large-scale analysis of RPO and WCD attacks on high-profile sites. We present repeatable methodologies and automated path confusion crawlers which detect thousands of sites that are still vulnerable to RPO or WCD only with specific types of path confusion techniques. Our results attest to the severity of the path confusion based class of attacks and how extensively they can affect clients and systems. We analyze some browser-based mitigation techniques for RPO and argue that WCD cannot be treated as a common vulnerability of any single component; instead, it arises when an ecosystem of individually impeccable components ends up in a faulty situation.
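A rough, illustrative check for the WCD variant of path confusion, assuming a hypothetical target URL and secret marker; the real crawls described above are considerably more careful:

    import requests

    def wcd_vulnerable(private_url, victim_cookies, secret_marker):
        confused = private_url.rstrip("/") + "/nonexistent.css"   # path-confused URL
        # The victim's authenticated request primes any shared cache in the path.
        requests.get(confused, cookies=victim_cookies)
        # The attacker's unauthenticated request: if the response still contains
        # the secret, the private page was cached under the "static" URL.
        attacker_view = requests.get(confused)
        return secret_marker in attacker_view.text

    # e.g. wcd_vulnerable("https://example.com/account/settings",
    #                     victim_cookies, "victim@example.com")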
609

The impact of innovative ICT technologies on the power consumption and CO2 emission of HTTP servers

Soler Domínguez, Sebastian January 2022 (has links)
ICT technologies and their adoption by the population are growing fast, and the energy that this industry requires has followed the same trend, even considering all the improvements in efficiency over the last decades. This is because the growth in data centers and information outpaces all the efficiency gains that have been adopted over the years. HTTP servers have been optimizing data-usage performance over the years; however, data centers still consume more and more energy due to the high demand they face. The objective of this study is to develop a tool that compares the energy, and hence CO2-emission, performance of cache and non-cache servers, using a simple and an advanced model. The simple model is based on a compilation of extensive data analysis including more detailed information and inputs, and the advanced model considers an energy-consumption comparison between cache and non-cache technology. A database of CO2 emissions per MWh for 49 countries is created that forecasts this rate until 2030. The results show that cache servers are between 20% and 5% more efficient than non-cache servers in terms of energy consumption for files under 5 MB. However, the efficiency level varies depending on the size of the transferred file. Therefore, improved ICT technology has the potential to reduce thousands of tons of CO2 per year if more websites adopt it. For example, an average news website with 300k visits per day could save around 150 tons of CO2 per year.
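A back-of-the-envelope sketch of the kind of comparison the tool performs; every number and parameter below is an assumed placeholder rather than a figure from the thesis:

    def annual_co2_tons(visits_per_day, page_mb, kwh_per_gb, co2_kg_per_mwh):
        gb_per_year = visits_per_day * 365 * page_mb / 1024
        mwh_per_year = gb_per_year * kwh_per_gb / 1000
        return mwh_per_year * co2_kg_per_mwh / 1000     # kg to metric tons

    def cache_savings_tons(visits_per_day, page_mb, kwh_per_gb, co2_kg_per_mwh,
                           cache_efficiency=0.15):      # assumed 5-20% energy saving
        baseline = annual_co2_tons(visits_per_day, page_mb, kwh_per_gb, co2_kg_per_mwh)
        return baseline * cache_efficiency

    # e.g. cache_savings_tons(visits_per_day=300_000, page_mb=2.0,
    #                         kwh_per_gb=0.1, co2_kg_per_mwh=400)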
610

Jämförelse av cache-tjänster: WSUS Och LanCache / Comparison of cache services: WSUS and LanCache

Shammaa, Mohammad Hamdi, Aldrea, Sumaia January 2023 (has links)
In the field of network technology and data communication, there is today a belief in network caching, a technique that stores data so it can later be retrieved more quickly. Over the years, this technique has proven its ability to deliver requested data to its clients efficiently. Several caching services use this technique for Windows updates, among them Windows Server Update Services (WSUS) and LanCache. On behalf of the company TNS Gaming AB, these services are compared with each other in this thesis. Network caching is an interesting research area for future communication systems and networks thanks to its benefits. Likewise, the task of comparing the cache services WSUS and LanCache is interesting as it provides insight into which service is better suited for the company or other stakeholders. Both the research area and the task are important and interesting when users want to make more efficient use of their internet connection and conserve network resources; the technique can thereby reduce download times. This work answers questions about the network performance, resource usage, and administration time of each cache service, and about which cache service is better suited to the company's needs. The work involves experiments comprising three main measurements, followed by a single case study. The purpose of the work is to compare WSUS and LanCache using the measurements from the experiments; the outcome then forms a basis for the future choice of solution. The results consist of two parts. The first shows that both cache services contribute to shorter download times. The second is that LanCache outperforms WSUS in terms of network performance and resource usage, and also requires less administration time than WSUS. Given these results, the conclusion is that LanCache is the most suitable caching service in this case.
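As an illustration only (not the thesis's measurement setup), download times with and without a cache in the path could be compared along these lines:

    import time
    import requests

    def timed_download(url, repeats=3):
        samples = []
        for _ in range(repeats):
            start = time.perf_counter()
            response = requests.get(url)
            response.raise_for_status()
            samples.append(time.perf_counter() - start)
        return min(samples)                         # best-case time for this path

    # cold   = timed_download(update_url_via_wan)      # no cache in the path
    # cached = timed_download(update_url_via_cache)    # WSUS or LanCache in front
    # speedup = cold / cached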
