21

Design and Evaluation of Enhanced Network Caching Systems to Improve Content Delivery in the Internet / Conception et évaluation de systèmes de caching de réseau pour améliorer la distribution des contenus sur Internet

Araldo, Andrea Giuseppe 07 October 2016 (has links)
Network caching can help cope with today's Internet traffic explosion and sustain the demand for an increasing user Quality of Experience (QoE). Nonetheless, the techniques proposed in the literature do not exploit all the potential benefits. Indeed, they usually aim to optimize hit ratio or other network-centric metrics, e.g. path length, latency, etc., while network operators are more focused on practical metrics, like cost and quality of experience. We devise caching techniques that directly target the latter objectives and show that this yields better performance. More specifically, we first propose novel strategies that reduce the Internet Service Provider (ISP) operational cost by preferentially caching the objects whose cost of retrieval is the largest. We show that a trade-off exists between classic hit-ratio maximization and cost reduction. We then focus on video delivery, since it is the most sensitive to QoE and represents most of the Internet traffic. Classic caching techniques ignore its particular characteristics, for example that each video is available in different representations, encoded at different bit-rates and resolutions. We devise techniques that take this into account. Finally, we point out that the techniques presented in the literature assume perfect knowledge of the objects that are crossing the network. Nonetheless, most of the traffic today is encrypted, and thus such caching techniques are inapplicable. To overcome this limit, we propose a mechanism which allows ISPs to cache even without being able to observe the objects being sent.
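As a rough illustration of the cost-aware caching idea described above, the sketch below keeps the objects that are most expensive to retrieve again; the eviction rule, class name and cost values are illustrative assumptions, not the thesis's actual algorithm. A hit-ratio-maximizing policy such as LRU would evict on recency alone, which is where the cost/hit-ratio trade-off mentioned in the abstract appears.

    class CostAwareCache:
        """Toy cache that preferentially keeps objects with high retrieval cost."""

        def __init__(self, capacity):
            self.capacity = capacity      # maximum number of cached objects
            self.store = {}               # object_id -> (content, retrieval_cost)

        def get(self, object_id, fetch_from_origin):
            # Hit: no retrieval cost is paid.
            if object_id in self.store:
                return self.store[object_id][0]
            # Miss: fetch the object and learn its retrieval cost.
            content, cost = fetch_from_origin(object_id)
            if len(self.store) >= self.capacity:
                # Evict the cached object that is cheapest to fetch again,
                # but only if the new object is costlier to re-fetch.
                cheapest = min(self.store, key=lambda k: self.store[k][1])
                if self.store[cheapest][1] >= cost:
                    return content        # new object not worth caching
                del self.store[cheapest]
            self.store[object_id] = (content, cost)
            return content

    # Illustrative usage: "a" is expensive to re-fetch, "b" is cheap.
    costs = {"a": 5.0, "b": 1.0}
    fetch = lambda oid: ("<content of %s>" % oid, costs[oid])
    cache = CostAwareCache(capacity=1)
    cache.get("b", fetch)                 # "b" cached
    cache.get("a", fetch)                 # "b" evicted: "a" costs more to re-fetch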
22

Secure Data Service Outsourcing with Untrusted Cloud

Xiong, Huijun 10 June 2013 (has links)
Outsourcing data services to the cloud is a natural fit for cloud usage. However, increasing security and privacy concerns from both enterprises and individuals about their outsourced data inhibit this trend. In this dissertation, we introduce service-centric solutions to address two types of security threats existing in current cloud environments: semi-honest cloud providers and malicious cloud customers. Our solution aims not only to provide confidentiality and access controllability of outsourced data with strong cryptographic guarantees, but, more importantly, to fulfill the specific security requirements of different cloud services in effective, systematic ways. To provide strong cryptographic guarantees for outsourced data, we study the generic security problem caused by semi-honest cloud providers and introduce a novel proxy-based secure data outsourcing scheme. Specifically, our scheme improves the efficiency of traditional proxy re-encryption by integrating symmetric encryption and proxy re-encryption algorithms. By reducing the computational cost of applying the re-encryption operation directly to the encrypted data, our scheme allows flexible and efficient user revocation without revealing the underlying data and without heavy computation in the untrusted cloud. To address the specific requirements of different cloud services, we investigate two of them: cloud-based content delivery and cloud-based data processing. For the former, we focus on preserving the caching property of the content delivery network and propose CloudSeal, a scheme for securely and flexibly sharing and distributing content via the public cloud. By caching the major part of a stored encrypted content object in the delivery network for content distribution and keeping the minor part with the data owner for content authorization, CloudSeal achieves both security and efficiency, theoretically and experimentally. For the latter, we design and realize CloudSafe, a framework that supports secure and efficient data processing with minimal key leakage in the vulnerable cloud virtualization environment. Through the adoption of a one-time cryptographic key strategy and a centralized key management framework, CloudSafe efficiently avoids cross-VM side-channel attacks from malicious cloud customers. Our experimental results confirm the practicality and scalability of CloudSafe. / Ph. D.
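A minimal sketch (with a toy cipher) of the hybrid pattern the abstract points to: the bulk data is encrypted once under a content key, only the small wrapped key is handled per user, and revocation never touches the bulk ciphertext. The key wrapping below is a plain stand-in; CloudSeal's actual construction applies proxy re-encryption at that step so the cloud can transform wrapped keys without learning them.

    import os, hmac, hashlib

    def keystream_xor(key, data):
        # Toy XOR stream cipher; stands in for a real symmetric cipher (e.g. AES).
        out = bytearray()
        counter = 0
        while len(out) < len(data):
            block = hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
            out.extend(block)
            counter += 1
        return bytes(a ^ b for a, b in zip(data, out))

    # 1. The owner encrypts the bulk data once under a random content key.
    content_key = os.urandom(32)
    ciphertext = keystream_xor(content_key, b"large outsourced data object")

    # 2. Only the small content key is wrapped per user.  A real scheme would
    #    proxy re-encrypt these wrapped keys in the cloud without seeing them.
    user_keys = {"alice": os.urandom(32), "bob": os.urandom(32)}
    wrapped = {u: keystream_xor(k, content_key) for u, k in user_keys.items()}

    # 3. Revoking a user removes (or re-encrypts) only a wrapped key; a full
    #    scheme would also rotate content_key for the remaining users.
    del wrapped["bob"]

    # 4. An authorized user recovers the data via the cheap key unwrap.
    alice_key = keystream_xor(user_keys["alice"], wrapped["alice"])
    assert keystream_xor(alice_key, ciphertext) == b"large outsourced data object"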
23

Confused by Path: Analysis of Path Confusion Based Attacks

Mirheidari, Seyed Ali 12 November 2020 (has links)
URL parsing and normalization are common and important operations in different web frameworks and technologies. In recent years, security researchers have targeted these processes and discovered high-impact vulnerabilities and exploitation techniques. Taking a different approach, we focus on the semantic disconnect among different, framework-independent web technologies (e.g., browsers, proxies, cache servers, web servers), which results in different URL interpretations. We coined the term “Path Confusion” to represent this disagreement, and this thesis focuses on analyzing the enabling factors and the security impact of this problem. In this thesis, we show the impact and importance of path confusion in two attack classes: Style Injection by Relative Path Overwrite (RPO) and Web Cache Deception (WCD). We use these attacks as case studies to demonstrate how path confusion techniques make targeted sites exploitable. Moreover, we propose novel variations of each attack which expand the number of vulnerable sites and introduce new attack scenarios. We present instances which have been secured against these attacks, yet are still exploitable with the introduced path confusion techniques. To further elucidate the seriousness of path confusion, we also present large-scale analysis results of RPO and WCD attacks on high-profile sites. We present repeatable methodologies and automated path confusion crawlers which detect thousands of sites that are still vulnerable to RPO or WCD only under specific types of path confusion techniques. Our results attest to the severity of the path confusion based class of attacks and how extensively they can affect clients and systems. We analyze some browser-based mitigation techniques for RPO and discuss why WCD cannot be treated as a common vulnerability of each individual component; instead, it arises when an ecosystem of individually impeccable components ends up in a faulty situation.
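As an illustration of the disagreement that path confusion exploits (the URL, the routing rule and the cacheability rule below are invented for the example, not taken from the thesis's measurements), a web-cache-deception-style request can make an edge cache and an origin server interpret the same path differently:

    from urllib.parse import urlparse

    url = "https://example.com/account/settings/nonexistent.css"
    path = urlparse(url).path

    # Hypothetical origin server: unknown trailing segments fall through to the
    # parent dynamic endpoint, so it serves the private account page.
    def origin_route(path):
        if path.startswith("/account/settings"):
            return "PRIVATE account page for the logged-in user"
        return "404"

    # Hypothetical edge cache: cacheability is decided purely from the file
    # extension of the last path segment.
    def cache_thinks_static(path):
        return path.rsplit(".", 1)[-1] in {"css", "js", "png", "jpg"}

    response = origin_route(path)
    if cache_thinks_static(path):
        # Same URL, two interpretations: the origin returns private content,
        # the cache treats it as a public static asset and stores it.
        print("Cached publicly:", response)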
24

Algorithmic Mechanism Design for Data Replication Problems

Guo, Minzhe 13 September 2016 (has links)
No description available.
25

Enhancing Location-Based Content Delivery Through Semi-Automated Generation of User Profile

Lal, Neeraj January 2010 (has links)
No description available.
26

NetClust: A Framework for Scalable and Pareto-Optimal Media Server Placement

Yin, H., Zhang, X., Zhan, T.Y., Zhang, Y., Min, Geyong, Wu, D.O. January 2013 (has links)
Effective media server placement strategies are critical for the quality and cost of multimedia services. Existing studies have primarily focused on optimization-based algorithms that select server locations from a small pool of candidates based on the entire topological information; these algorithms are not scalable because such a pool of candidates is often unavailable and gathering the topological information in large-scale networks is inefficient. To overcome this limitation, a novel scalable framework called NetClust is proposed in this paper. NetClust takes advantage of the latest network coordinate technique to reduce the workload of obtaining global network information for server placement, and adopts a new K-means-clustering-based algorithm to select server locations and identify the optimal matching between clients and servers. The key contribution of this paper is that the proposed framework optimizes the trade-off between service delay performance and deployment cost under the constraints of the client location distribution and the computing/storage/bandwidth capacity of each server simultaneously. To evaluate the performance of the proposed framework, a prototype system is developed and deployed in the real-world, large-scale Internet. Experimental results demonstrate that 1) NetClust achieves lower deployment cost and lower delay compared to the traditional server selection method; and 2) NetClust offers a practical and feasible solution for multimedia service providers.
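A minimal sketch of the clustering step (synthetic coordinates and plain k-means, standing in for NetClust's actual placement algorithm and capacity constraints): clients embedded as network coordinates, e.g. by a system such as Vivaldi, are clustered, a server site is placed at each centroid, and clients are matched to their cluster's server.

    import random

    def kmeans(points, k, iters=20):
        # Plain Lloyd's k-means over 2-D network coordinates.
        centroids = random.sample(points, k)
        for _ in range(iters):
            clusters = [[] for _ in range(k)]
            for p in points:
                i = min(range(k),
                        key=lambda c: (p[0] - centroids[c][0]) ** 2 +
                                      (p[1] - centroids[c][1]) ** 2)
                clusters[i].append(p)
            new_centroids = []
            for i, cl in enumerate(clusters):
                if cl:
                    new_centroids.append((sum(x for x, _ in cl) / len(cl),
                                          sum(y for _, y in cl) / len(cl)))
                else:
                    new_centroids.append(centroids[i])  # keep empty cluster's centroid
            centroids = new_centroids
        return centroids, clusters

    # Two synthetic client populations; each centroid is a candidate server site.
    clients = ([(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(200)] +
               [(random.gauss(10, 1), random.gauss(10, 1)) for _ in range(200)])
    servers, assignment = kmeans(clients, k=2)
    print("server sites:", servers)
    print("clients per site:", [len(c) for c in assignment])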
27

Analyse, Modellierung und Verfahren zur Kompensation von CDN-bedingten Verkehrslastverschiebungen in ISP-Netzen

Windisch, Gerd 17 March 2017 (has links) (PDF)
A large share of the traffic in Internet Service Provider (ISP) networks is nowadays caused by Content Delivery Networks (CDNs). CDN operators use load-balancing mechanisms to even out the utilization of their CDN infrastructure, and they do so without coordination with the ISP operators. Large traffic shifts can therefore occur both within an ISP network and on the interconnection links between the ISP network and the CDNs. This thesis investigates which non-cooperative options an ISP has to counteract or mitigate traffic shifts caused by load-balancing mechanisms inside a CDN. The basis for this investigation is an analysis of the server selection behaviour of the YouTube CDN. To this end, an active measurement method has been developed to determine the spatial and temporal behaviour of YouTube's server selection. Two measurement studies examine the server selection in German and European ISP networks. Based on these studies, a traffic model is developed that captures the traffic shifts caused by changes in YouTube's server selection. The traffic model in turn forms the basis for computing optimal routes in the ISP network that are highly robust against CDN-induced traffic shifts (alpha-robust routing optimization). To solve the robust routing optimization problem, an iterative procedure is developed and a compact reformulation is presented. The performance of alpha-robust routing is evaluated on three example network topologies, and the new approach is compared with alternative robust routing methods and with a non-robust method. In addition to the robust routing optimization, the thesis presents three further ideas for non-cooperative methods (BGP-, IP-prefix- and DNS-based) to counteract CDN-induced traffic shifts.
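As a hedged sketch of what a worst-case robust routing objective of this general kind can look like (a generic formulation, not necessarily the thesis's exact alpha-robust definition): with f^{st}_e the fraction of demand from s to t routed over link e, c_e the capacity of link e, and D ranging over the set of traffic matrices induced by the different CDN server-selection states, one minimizes the worst-case maximum link utilization:

    \min_{f}\; \max_{D \in \mathcal{D}}\; \max_{e \in E}\;
        \frac{\sum_{(s,t)} D_{st}\, f^{st}_{e}}{c_{e}}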
29

Social network support for data delivery infrastructures

Sastry, Nishanth Ramakrishna January 2011 (has links)
Network infrastructures often need to stage content so that it is accessible to consumers. The standard solution, deploying the content on a centralised server, can be inadequate in several situations. Our thesis is that information encoded in social networks can be used to tailor content staging decisions to the user base and thereby build better data delivery infrastructures. This claim is supported by two case studies, which apply social information in challenging situations where traditional content staging is infeasible. Our approach works by examining empirical traces to identify relevant social properties, and then exploits them. The first study looks at cost-effectively serving the "Long Tail" of rich-media user-generated content, which needs to be staged close to viewers to control latency and jitter. Our traces show that a preference for the unpopular tail items often spreads virally and is localised to some part of the social network. Exploiting this, we propose Buzztraq, which decreases replication costs by selectively copying items to locations favoured by viral spread. We also design SpinThrift, which separates popular and unpopular content based on the relative proportion of viral accesses, and opportunistically spins down disks containing unpopular content, thereby saving energy. The second study examines whether human face-to-face contacts can efficiently create paths over time between arbitrary users. Here, content is staged by spreading it through intermediate users until the destination is reached. Flooding every node minimises delivery times but is not scalable. We show that the human contact network is resilient to individual path failures and, for unicast paths, can efficiently approximate flooding in delivery time distribution simply by randomly sampling a handful of the paths it finds. Multicast by contained flooding within a community is also efficient. However, connectivity relies on rare contacts, and frequent contacts are often not useful for data delivery. Also, periods of similar duration can achieve different levels of connectivity; we devise a test to identify good periods. We finish by discussing how these properties influence routing algorithms.
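A rough sketch of the popularity-separation idea behind SpinThrift (the threshold, the field layout and the example log are invented for illustration; the real system works on empirical access traces): items whose accesses are mostly viral are grouped onto disks that can be opportunistically spun down.

    def partition_by_viral_fraction(access_log, threshold=0.5):
        # access_log: iterable of (item_id, is_viral) pairs, where is_viral marks
        # an access that arrived via social/viral propagation rather than search.
        # The 0.5 threshold is an assumption for illustration only.
        totals, viral = {}, {}
        for item, is_viral in access_log:
            totals[item] = totals.get(item, 0) + 1
            viral[item] = viral.get(item, 0) + (1 if is_viral else 0)
        popular, tail = [], []
        for item, n in totals.items():
            (tail if viral[item] / n >= threshold else popular).append(item)
        return popular, tail

    # Popular items stay on always-on disks; mostly-viral tail items are grouped
    # onto disks that can be opportunistically spun down to save energy.
    log = [("a", False), ("a", False), ("a", True), ("b", True), ("b", True)]
    popular, tail = partition_by_viral_fraction(log)
    print("always-on disks:", popular, "| spin-down candidates:", tail)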
30

Estudio, análisis y desarrollo de una red de distribución de contenido y su algoritmo de redirección de usuarios para servicios web y streaming

Molina Moreno, Benjamin 02 September 2013 (has links)
This thesis was created within the research line on Content Distribution Mechanisms in IP Networks, which has carried out its activity in different research projects, in the course "Mecanismos de Distribución de Contenidos en Redes IP" of the doctoral programme "Telecomunicaciones" taught by the Department of Communications of the UPV and, currently, in the Máster Universitario en Tecnologías, Sistemas y Redes de Comunicación. The growth of the Internet is widely known, both in number of clients and in generated traffic. This brings a multimedia interface to clients, where data, voice, video, music, etc. can converge. While this represents a business opportunity along multiple dimensions, scalability must be addressed seriously, so that the average performance of the system is not degraded as the number of clients or the volume of requested information increases. The study and analysis of web and streaming content distribution using CDNs is the object of this project. The approach is generalist, ignoring network-layer solutions such as IP multicast, as well as resource reservation, since they are not natively available in the Internet infrastructure. This leads to the introduction of the application layer as the coordinating framework for content distribution. Among these networks, also called overlay networks, a Content Delivery Network (CDN) has been chosen. This type of application-level network is highly scalable and allows full control over the resources and functionality of all the elements of its architecture. This makes it possible to evaluate the performance of a CDN that distributes multimedia content in terms of required bandwidth, response time obtained by clients, perceived quality, distribution mechanisms, content lifetime when caching is used, etc. CDNs were born at the end of the nineties with the main objective of eliminating or attenuating the so-called flash-crowd effect, caused by a massive influx of clients. Currently, this type of network is directing most of its efforts towards the ability to offer streaming media over the Internet. For a thorough analysis, this thesis proposes an initial simplified CDN model, at both a theoretical and a practical level. On the theoretical side, a mathematical model that allows a CDN to be evaluated analytically is presented. This model becomes considerably more complex as new functionality is introduced, so a simulation model is proposed and developed that makes it possible, on the one hand, to check the validity of the mathematical framework and, on the other, to establish a comparative framework for the practical implementation of the CDN, a task carried out in the final phase of the thesis. In this way, the results obtained cover theory, simulation and practice. / Molina Moreno, B. (2013). Estudio, análisis y desarrollo de una red de distribución de contenido y su algoritmo de redirección de usuarios para servicios web y streaming [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/31637 / TESIS
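As a hedged sketch of what a user-redirection decision in such a CDN might look like (the scoring weights, surrogate list and load figures are illustrative assumptions, not the redirection algorithm developed in the thesis), the redirector below picks the surrogate with the best combination of proximity and current load; in practice the answer would be returned via HTTP redirection or DNS.

    def pick_surrogate(client_region, surrogates, w_distance=0.7, w_load=0.3):
        # Choose a surrogate by a weighted score of network distance and load.
        # 'surrogates' entries carry per-region RTT estimates and a load in [0, 1];
        # the weights are illustrative, not tuned values.
        def score(s):
            rtt = s["rtt_ms"][client_region]
            return w_distance * (rtt / 100.0) + w_load * s["load"]
        return min(surrogates, key=score)

    surrogates = [
        {"name": "edge-eu", "rtt_ms": {"eu": 15, "us": 120}, "load": 0.80},
        {"name": "edge-us", "rtt_ms": {"eu": 110, "us": 20}, "load": 0.30},
    ]
    print(pick_surrogate("eu", surrogates)["name"])   # -> edge-eu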
