21
Design and Evaluation of Enhanced Network Caching Systems to Improve Content Delivery in the Internet / Conception et évaluation de systèmes de caching de réseau pour améliorer la distribution des contenus sur Internet
Araldo, Andrea Giuseppe, 07 October 2016 (has links)
Network caching can help cope with today's Internet traffic explosion and sustain the demand for an increasing user Quality of Experience (QoE). Nonetheless, the techniques proposed in the literature so far do not exploit all the potential benefits: they usually aim to optimize the hit ratio or other network-centric metrics, e.g. path length or latency, whereas network operators (ISPs) are more interested in practical metrics such as cost and quality of experience. We devise caching techniques that directly target the latter objectives and show that doing so yields better performance. More specifically, we first propose novel strategies that reduce the Internet Service Provider (ISP) operational cost by preferentially caching the objects whose retrieval cost is the largest, and we show that a trade-off exists between classic hit-ratio maximization and cost reduction. We then focus on video delivery, since it is the most sensitive to QoE and represents most of the Internet traffic. Classic caching techniques ignore its particular characteristics, for example that each video is available in several representations, encoded at different bit-rates and resolutions; we introduce techniques that take this into account. Finally, we point out that current techniques assume perfect knowledge of the objects crossing the network. However, most of today's traffic is encrypted, which makes such caching techniques inapplicable. To overcome this limit, we propose a mechanism that allows ISPs to cache even though they cannot observe the objects being sent.
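For illustration, here is a minimal Python sketch of the cost-aware caching idea in this abstract, assuming a simple policy that evicts the cheapest-to-refetch object; the class and eviction rule are invented for illustration and are not the strategies proposed in the thesis.

```python
# Illustrative sketch only: a cache that prefers to keep objects that are
# expensive to retrieve again, as opposed to classic LRU. The names and the
# eviction rule are assumptions, not the policies proposed in the thesis.

class CostAwareCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}          # object_id -> (content, retrieval_cost)

    def get(self, object_id, fetch, retrieval_cost):
        """Return cached content, or fetch it and consider caching it."""
        if object_id in self.store:
            return self.store[object_id][0]           # cache hit
        content = fetch(object_id)                    # cache miss: pay the retrieval cost
        self._admit(object_id, content, retrieval_cost)
        return content

    def _admit(self, object_id, content, retrieval_cost):
        if len(self.store) < self.capacity:
            self.store[object_id] = (content, retrieval_cost)
            return
        # Evict the cheapest-to-refetch object, but only if the new one is dearer.
        victim = min(self.store, key=lambda k: self.store[k][1])
        if self.store[victim][1] < retrieval_cost:
            del self.store[victim]
            self.store[object_id] = (content, retrieval_cost)


cache = CostAwareCache(capacity=2)
cache.get("a", lambda _: b"A", retrieval_cost=1.0)
cache.get("b", lambda _: b"B", retrieval_cost=5.0)
cache.get("c", lambda _: b"C", retrieval_cost=3.0)   # evicts "a", the cheapest to refetch
```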
22
Secure Data Service Outsourcing with Untrusted Cloud
Xiong, Huijun, 10 June 2013 (has links)
Outsourcing data services to the cloud is a natural fit for cloud usage. However, growing security and privacy concerns from both enterprises and individuals about their outsourced data inhibit this trend. In this dissertation, we introduce service-centric solutions to address two types of security threats in current cloud environments: semi-honest cloud providers and malicious cloud customers. Our solutions aim not only to provide confidentiality and access control for outsourced data with strong cryptographic guarantees, but, more importantly, to fulfill the specific security requirements of different cloud services in an effective and systematic way.
To provide strong cryptographic guarantees for outsourced data, we study the generic security problem caused by semi-honest cloud providers and introduce a novel proxy-based secure data outsourcing scheme. Specifically, our scheme improves the efficiency of traditional proxy re-encryption by combining it with symmetric encryption. Because the re-encryption operation is applied directly to the encrypted data at low computational cost, the scheme allows flexible and efficient user revocation without revealing the underlying data or requiring heavy computation in the untrusted cloud.
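As a rough illustration of the hybrid pattern described here (bulk data under a symmetric key, with only the small key handled by proxy re-encryption), the sketch below uses Fernet for the symmetric part and placeholder pre_* functions that merely stand in for a real proxy re-encryption scheme; it is not the dissertation's actual construction and provides no real PRE security.

```python
# Sketch of the hybrid pattern: the large payload is protected with a symmetric
# key, and only that small key would be handled by proxy re-encryption (PRE).
# The pre_* functions are structural placeholders for a real PRE scheme and do
# NOT provide its cryptographic guarantees.
from cryptography.fernet import Fernet

def pre_encrypt(owner_public, sym_key):
    return ("for-owner", owner_public, sym_key)       # placeholder capsule

def pre_generate_rekey(owner_private, user_public):
    return ("rekey", owner_private, user_public)      # placeholder re-encryption key

def pre_reencrypt(rekey, capsule):
    return ("for-user", rekey[2], capsule[2])         # proxy transforms only the capsule

def pre_decrypt(user_private, capsule):
    return capsule[2]                                 # user recovers the symmetric key

# Data owner: encrypt the (large) payload once with a symmetric key.
sym_key = Fernet.generate_key()
ciphertext = Fernet(sym_key).encrypt(b"large outsourced data object")
capsule = pre_encrypt(owner_public="owner-pk", sym_key=sym_key)

# Untrusted proxy/cloud: re-encrypts only the small capsule, never sees the data.
rekey = pre_generate_rekey(owner_private="owner-sk", user_public="user-pk")
user_capsule = pre_reencrypt(rekey, capsule)

# Authorized user: recovers the symmetric key and decrypts the payload locally.
recovered_key = pre_decrypt(user_private="user-sk", capsule=user_capsule)
assert Fernet(recovered_key).decrypt(ciphertext) == b"large outsourced data object"
```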
To address the specific requirements of different cloud services, we investigate two of them: cloud-based content delivery and cloud-based data processing. For the former, we focus on preserving cacheability in the content delivery network and propose CloudSeal, a scheme for securely and flexibly sharing and distributing content via the public cloud. By caching the major part of a stored ciphertext object in the delivery network for content distribution while keeping the minor part with the data owner for content authorization, CloudSeal achieves both security and efficiency, which we demonstrate theoretically and experimentally. For the latter, we design and realize CloudSafe, a framework that supports secure and efficient data processing with minimal key leakage in the vulnerable cloud virtualization environment. By adopting a one-time cryptographic key strategy and a centralized key management framework, CloudSafe efficiently prevents cross-VM side-channel attacks from malicious cloud customers. Our experimental results confirm the practicality and scalability of CloudSafe. / Ph. D.
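A toy sketch of the split-delivery idea attributed to CloudSeal above, assuming an arbitrary 95/5 split of the ciphertext; the function names and split ratio are illustrative assumptions, not CloudSeal's construction.

```python
# Illustrative split-delivery sketch: the bulk of an encrypted object is pushed
# to delivery caches, while a small piece stays with the data owner and is
# released only to authorized users. Split ratio and names are assumptions.

def split_for_delivery(cipher_object: bytes, owner_fraction: float = 0.05):
    cut = int(len(cipher_object) * (1 - owner_fraction))
    cacheable_part = cipher_object[:cut]   # staged in the content delivery network
    owner_part = cipher_object[cut:]       # kept by the owner for authorization
    return cacheable_part, owner_part

def reassemble(cacheable_part: bytes, owner_part: bytes) -> bytes:
    return cacheable_part + owner_part     # only authorized users obtain owner_part

cipher_object = b"\x00" * 1000             # stand-in for an encrypted content object
major, minor = split_for_delivery(cipher_object)
assert reassemble(major, minor) == cipher_object
```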
23
Algorithmic Mechanism Design for Data Replication Problems
Guo, Minzhe, 13 September 2016 (has links)
No description available.
24
Enhancing Location-Based Content Delivery Through Semi-Automated Generation of User Profile
Lal, Neeraj, January 2010 (has links)
No description available.
25
NetClust: A Framework for Scalable and Pareto-Optimal Media Server Placement
Yin, H., Zhang, X., Zhan, T.Y., Zhang, Y., Min, Geyong, Wu, D.O., January 2013 (has links)
Effective media server placement strategies are critical for the quality and cost of multimedia services. Existing studies have primarily focused on optimization-based algorithms that select server locations from a small pool of candidates using complete topological information; these algorithms do not scale, because such a candidate pool may not be available and gathering topological information in large-scale networks is inefficient. To overcome this limitation, a novel scalable framework called NetClust is proposed in this paper. NetClust takes advantage of the latest network coordinate technique to reduce the workload of obtaining global network information for server placement, and adopts a new K-means-clustering-based algorithm to select server locations and identify the optimal matching between clients and servers. The key contribution of this paper is that the proposed framework optimizes the trade-off between service delay and deployment cost under the constraints of the client location distribution and the computing/storage/bandwidth capacity of each server. To evaluate the performance of the proposed framework, a prototype system is developed and deployed in a real-world large-scale Internet environment. Experimental results demonstrate that 1) NetClust achieves lower deployment cost and lower delay compared to the traditional server selection method; and 2) NetClust offers a practical and feasible solution for multimedia service providers.
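A rough sketch of K-means-based placement over network coordinates, in the spirit of this abstract; the real NetClust framework also accounts for server capacities and deployment cost, which are omitted here.

```python
# Rough K-means placement sketch: cluster client network coordinates and treat
# the cluster centers as suggested server sites. Capacity and cost constraints
# from the paper are intentionally left out.
import numpy as np

def place_servers(client_coords: np.ndarray, k: int, iterations: int = 50):
    """Cluster client network coordinates; centers suggest server locations."""
    rng = np.random.default_rng(0)
    centers = client_coords[rng.choice(len(client_coords), size=k, replace=False)]
    for _ in range(iterations):
        # Assign each client to its nearest candidate server (distance in the
        # coordinate space approximates network latency).
        dists = np.linalg.norm(client_coords[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each candidate server to the centroid of its assigned clients.
        for j in range(k):
            members = client_coords[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers, labels

clients = np.random.default_rng(1).normal(size=(200, 2))   # synthetic 2-D coordinates
servers, assignment = place_servers(clients, k=4)
```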
26
Confused by Path: Analysis of Path Confusion Based Attacks
Mirheidari, Seyed Ali, 12 November 2020 (has links)
URL parsing and normalization are common and important operations in many web frameworks and technologies. In recent years, security researchers have targeted these processes and discovered high-impact vulnerabilities and exploitation techniques. Taking a different approach, we focus on the semantic disconnect among framework-independent web technologies (e.g., browsers, proxies, cache servers, web servers) that results in different URL interpretations. We coined the term “Path Confusion” to represent this disagreement, and this thesis analyzes its enabling factors and security impact. We show the impact and importance of path confusion in two attack classes: Style Injection by Relative Path Overwrite (RPO) and Web Cache Deception (WCD). We use these attacks as case studies to demonstrate how path confusion techniques make targeted sites exploitable. Moreover, we propose novel variations of each attack that expand the number of vulnerable sites and introduce new attack scenarios, and we present instances that had been secured against the original attacks yet remain exploitable with the introduced path confusion techniques. To further elucidate the seriousness of path confusion, we also present large-scale analyses of RPO and WCD attacks on high-profile sites. We present repeatable methodologies and automated path confusion crawlers that detect thousands of sites which are vulnerable to RPO or WCD only under specific types of path confusion. Our results attest to the severity of path-confusion-based attacks and how extensively they can affect clients and systems. We analyze browser-based mitigation techniques for RPO and argue that WCD cannot be treated as a vulnerability of any single component; instead, it arises when an ecosystem of individually impeccable components ends up in a faulty configuration.
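To make the Web Cache Deception flavour of path confusion concrete, the sketch below models a naive edge cache that keys on the apparent file extension and an origin that routes any /account.php/* path to the same dynamic handler; both rules are simplified assumptions for illustration, not the exact configurations studied in the thesis.

```python
# Illustrative example of the path confusion behind Web Cache Deception (WCD),
# assuming a simplified cache rule and origin routing; real deployments vary.
from urllib.parse import urlsplit

CACHEABLE_SUFFIXES = (".css", ".js", ".png", ".jpg")

def cache_decides_to_store(url: str) -> bool:
    # A naive edge cache keys on the apparent file extension of the full path.
    return urlsplit(url).path.endswith(CACHEABLE_SUFFIXES)

def origin_resolves(url: str) -> str:
    # Many frameworks route /account.php/<anything> to the same dynamic handler,
    # so the trailing "/nonexistent.css" is ignored and private content is returned.
    path = urlsplit(url).path
    return "dynamic:account-page" if path.startswith("/account.php") else "static"

url = "https://example.com/account.php/nonexistent.css"
print(cache_decides_to_store(url))   # True  -> the edge cache stores the response
print(origin_resolves(url))          # dynamic:account-page -> private data gets cached
```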
27
Analyse, Modellierung und Verfahren zur Kompensation von CDN-bedingten Verkehrslastverschiebungen in ISP-Netzen / Analysis, Modelling and Methods to Compensate CDN-Induced Traffic Load Shifts in ISP Networks
Windisch, Gerd, 17 March 2017 (has links) (PDF)
A large share of the traffic in Internet Service Provider (ISP) networks is nowadays generated by Content Delivery Networks (CDNs). CDN operators use load-balancing mechanisms to even out the utilization of their CDN infrastructure. This happens without any coordination with the ISP operators. Large traffic load shifts can therefore occur both within an ISP network and on the interconnection links between the ISP network and the CDNs.
This thesis investigates which non-cooperative options an ISP has to counteract or mitigate traffic load shifts caused by load-balancing mechanisms inside a CDN. The basis for this investigation is an analysis of the server-selection behaviour of the YouTube CDN. For this purpose, an active measurement method has been developed to determine the spatial and temporal behaviour of YouTube's server selection. Two measurement studies examine the server selection in German and European ISP networks. Based on these studies, a traffic model is developed that captures the traffic load shifts caused by changes in YouTube's server selection. The traffic model in turn forms the basis for determining optimal routes in the ISP network that are highly robust against CDN-induced traffic load shifts (alpha-robust routing optimization). To solve the robust routing optimization problem, an iterative procedure is developed and a compact reformulation is presented. The performance of alpha-robust routing is evaluated on three example network topologies, and the new method is compared with alternative robust routing methods and with a non-robust method. In addition to the robust routing optimization, the thesis presents three further ideas for non-cooperative methods (BGP-, IP-prefix- and DNS-based) to counteract CDN-induced traffic load shifts.
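A toy illustration of the robustness criterion mentioned above: among a few hand-written candidate routings, pick the one with the best worst-case link utilization over several demand scenarios. This brute-force check only conveys the idea; it is not the alpha-robust optimization, the iterative procedure, or the compact reformulation developed in the thesis.

```python
# Toy illustration of choosing routes that are robust to CDN-induced load shifts:
# among a few candidate routings, pick the one with the lowest worst-case link
# utilization over several demand scenarios. Topology, routings and demands are
# invented for illustration.

link_capacity = {"A-B": 10.0, "A-C": 10.0, "C-B": 10.0}

# Candidate routings: fraction of the A->B demand sent over each link.
routings = {
    "all-direct":  {"A-B": 1.0, "A-C": 0.0, "C-B": 0.0},
    "split-50-50": {"A-B": 0.5, "A-C": 0.5, "C-B": 0.5},
}

# Demand scenarios (Gbit/s of A->B traffic) caused by different CDN server choices.
scenarios = [6.0, 9.0, 14.0]

def worst_case_utilization(routing):
    return max(demand * share / link_capacity[link]
               for demand in scenarios
               for link, share in routing.items())

best = min(routings, key=lambda name: worst_case_utilization(routings[name]))
print(best, worst_case_utilization(routings[best]))   # split-50-50 is more robust
```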
28
Analyse, Modellierung und Verfahren zur Kompensation von CDN-bedingten Verkehrslastverschiebungen in ISP-Netzen / Analysis, Modelling and Methods to Compensate CDN-Induced Traffic Load Shifts in ISP Networks
Windisch, Gerd, 02 February 2017 (has links)
A large share of the traffic in Internet Service Provider (ISP) networks is nowadays generated by Content Delivery Networks (CDNs). CDN operators use load-balancing mechanisms to even out the utilization of their CDN infrastructure. This happens without any coordination with the ISP operators. Large traffic load shifts can therefore occur both within an ISP network and on the interconnection links between the ISP network and the CDNs.
This thesis investigates which non-cooperative options an ISP has to counteract or mitigate traffic load shifts caused by load-balancing mechanisms inside a CDN. The basis for this investigation is an analysis of the server-selection behaviour of the YouTube CDN. For this purpose, an active measurement method has been developed to determine the spatial and temporal behaviour of YouTube's server selection. Two measurement studies examine the server selection in German and European ISP networks. Based on these studies, a traffic model is developed that captures the traffic load shifts caused by changes in YouTube's server selection. The traffic model in turn forms the basis for determining optimal routes in the ISP network that are highly robust against CDN-induced traffic load shifts (alpha-robust routing optimization). To solve the robust routing optimization problem, an iterative procedure is developed and a compact reformulation is presented. The performance of alpha-robust routing is evaluated on three example network topologies, and the new method is compared with alternative robust routing methods and with a non-robust method. In addition to the robust routing optimization, the thesis presents three further ideas for non-cooperative methods (BGP-, IP-prefix- and DNS-based) to counteract CDN-induced traffic load shifts.
29
Social network support for data delivery infrastructures
Sastry, Nishanth Ramakrishna, January 2011 (has links)
Network infrastructures often need to stage content so that it is accessible to consumers. The standard solution, deploying the content on a centralised server, can be inadequate in several situations. Our thesis is that information encoded in social networks can be used to tailor content staging decisions to the user base and thereby build better data delivery infrastructures. This claim is supported by two case studies, which apply social information in challenging situations where traditional content staging is infeasible. Our approach works by examining empirical traces to identify relevant social properties, and then exploits them. The first study looks at cost-effectively serving the "Long Tail" of rich-media user-generated content, which needs to be staged close to viewers to control latency and jitter. Our traces show that a preference for the unpopular tail items often spreads virally and is localised to some part of the social network. Exploiting this, we propose Buzztraq, which decreases replication costs by selectively copying items to locations favoured by viral spread. We also design SpinThrift, which separates popular and unpopular content based on the relative proportion of viral accesses, and opportunistically spins down disks containing unpopular content, thereby saving energy. The second study examines whether human face-to-face contacts can efficiently create paths over time between arbitrary users. Here, content is staged by spreading it through intermediate users until the destination is reached. Flooding every node minimises delivery times but is not scalable. We show that the human contact network is resilient to individual path failures, and for unicast paths, can efficiently approximate flooding in delivery time distribution simply by randomly sampling a handful of the paths it offers. Multicast by contained flooding within a community is also efficient. However, connectivity relies on rare contacts, and frequent contacts are often not useful for data delivery. Also, periods of similar duration can achieve different levels of connectivity; we devise a test to identify good periods. We finish by discussing how these properties influence routing algorithms.
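A small sketch of the SpinThrift-style separation described here, assuming content is classified by the share of accesses that arrive via social (viral) referrals; the threshold and names are illustrative assumptions, not values from the thesis.

```python
# Sketch: content whose accesses arrive mostly via viral/social referrals is
# treated as unpopular tail content and grouped so the disks holding it can be
# spun down. The 0.5 threshold is an assumption for illustration only.

def partition_by_viral_share(access_log, threshold=0.5):
    """access_log: item -> (viral_accesses, total_accesses)."""
    popular, unpopular = [], []
    for item, (viral, total) in access_log.items():
        share = viral / total if total else 1.0
        (unpopular if share >= threshold else popular).append(item)
    return popular, unpopular        # place each group on separate disks

log = {"clip1": (2, 100), "clip2": (45, 50), "clip3": (9, 10)}
popular, unpopular = partition_by_viral_share(log)
print(popular)     # ['clip1']            -> keep on always-spinning disks
print(unpopular)   # ['clip2', 'clip3']   -> candidates for spun-down disks
```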
30
Implementace CDN a clusteringu v prostředí GNU/Linux s testy výkonnosti. / CDN and clustering in GNU/Linux with performance testing
Mikulka, Pavel, January 2008 (has links)
Fault tolerance is essential in a production-grade service delivery network. One solution is to build a clustered environment that keeps system failures to a minimum. This thesis examines high-availability and load-balancing services built with open-source tools on GNU/Linux. It discusses general technologies of high-availability computing such as virtualization, synchronization and mirroring. DRBD, a tool for building synchronized Linux block devices, is well suited to building relatively cheap high-availability clusters. The thesis also examines the Linux-HA project, Red Hat Cluster Suite, LVS and related tools. Content Delivery Networks (CDNs) replicate content over several mirrored web servers strategically placed at various locations in order to deal with flash crowds. A CDN combines a request-routing mechanism with a replication mechanism, and thus offers fast and reliable applications and services by distributing content to cache servers located close to end-users. This work examines the open-source CDNs Globule and CoralCDN and tests their performance in a global deployment.
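As a simple illustration of the request-routing side of a CDN mentioned in this abstract, the sketch below sends each client to the nearest replica that passes a TCP health check; the health check, coordinates and addresses are simplified stand-ins, not Globule's or CoralCDN's actual mechanisms.

```python
# Minimal request-routing sketch: route each client to the closest replica that
# passes a health check. Addresses are documentation-range IPs; the distance
# metric and health check are simplified assumptions.
import socket

replicas = {"cache-eu": ("192.0.2.10", 80), "cache-us": ("198.51.100.20", 80)}
replica_coords = {"cache-eu": (50.1, 8.7), "cache-us": (40.7, -74.0)}

def healthy(addr, timeout=0.5):
    """Consider a replica healthy if its HTTP port accepts TCP connections."""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

def route(client_coord):
    candidates = [name for name, addr in replicas.items() if healthy(addr)]
    if not candidates:
        return None                          # fall back to the origin server
    return min(candidates,
               key=lambda n: (replica_coords[n][0] - client_coord[0]) ** 2
                             + (replica_coords[n][1] - client_coord[1]) ** 2)

# A client near Paris is routed to cache-eu if it is reachable, otherwise None.
print(route((48.8, 2.3)))
```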