21

Um processo de verificação e validação para os componentes do núcleo comum do middleware Ginga / A verification and validation process for the common-core components of the Ginga middleware

Caroca, Caio Regis 27 September 2010 (has links)
Ginga is the official, standardized middleware specification for the Brazilian Digital TV System. Building this software layer is highly complex, which also makes testing it complex. The importance of software testing and its relationship with quality must be emphasized: this kind of system still carries a high degree of development complexity, mainly because its specification is recent and proposes innovative features, and it is also critical software, since failures in the middleware implementation can compromise the success of Digital TV as a whole. The middleware is a key piece of a Digital TV system, because it dictates the rules under which applications run on the platform; its correctness is therefore vital for interactive applications to execute successfully. The Ginga CDN project (Ginga Code Development Network) is responsible for the collaborative, distributed development of a PC reference implementation of Ginga, based on software components and open to universities and companies. In this context, this work proposes a verification and validation process for the Ginga middleware, to be deployed in parallel with the Ginga CDN development process and focused on the common-core components (Ginga-CC). To this end, a set of tests was defined to verify the operation of the middleware and to validate the different component configurations generated by the Ginga CDN network.
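A minimal sketch of the kind of configuration-driven conformance test such a V&V process might include. The component names, roles, and interfaces below are hypothetical illustrations, not taken from the Ginga-CC specification or the Ginga CDN code base.

```python
import itertools
import unittest

# Hypothetical Ginga-CC component catalogue: each role can be filled by
# alternative implementations contributed by different Ginga CDN teams.
COMPONENT_ALTERNATIVES = {
    "tuner":     ["TunerStub", "TunerISDBT"],
    "demuxer":   ["DemuxerBasic", "DemuxerFull"],
    "scheduler": ["SchedulerRoundRobin"],
}

REQUIRED_METHODS = {
    "tuner":     ["start", "stop", "select_channel"],
    "demuxer":   ["attach", "filter_section"],
    "scheduler": ["schedule", "cancel"],
}


def load_component(role, name):
    """Stand-in for a real component loader; builds a dummy object here."""
    return type(name, (), {m: (lambda self, *a, **k: None)
                           for m in REQUIRED_METHODS[role]})()


class CommonCoreConfigurationTest(unittest.TestCase):
    """Check that every buildable configuration exposes the expected interfaces."""

    def test_all_configurations(self):
        roles = sorted(COMPONENT_ALTERNATIVES)
        for combo in itertools.product(*(COMPONENT_ALTERNATIVES[r] for r in roles)):
            with self.subTest(configuration=dict(zip(roles, combo))):
                for role, name in zip(roles, combo):
                    component = load_component(role, name)
                    for method in REQUIRED_METHODS[role]:
                        self.assertTrue(callable(getattr(component, method, None)),
                                        f"{name} is missing required method {method}")


if __name__ == "__main__":
    unittest.main()
```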
22

Adaptivitätssensitive Platzierung von Replikaten in Adaptiven Content Distribution Networks / Adaptation-aware placement of replicas in Adaptive Content Distribution Networks

Buchholz, Sven 08 July 2005 (has links)
Adaptive Content Distribution Networks (A-CDNs) are application-independent, distributed infrastructures that use content adaptation and distributed replication of contents to deliver adaptable multimedia contents scalably to heterogeneous clients. Replica placement in an A-CDN is controlled by the placement mechanisms of the A-CDN. As opposed to traditional CDNs, which do not take content adaptation into consideration, a replica placement mechanism in an A-CDN has to decide not only which object shall be stored in which surrogate but also which representation or representations of the object to replicate. Traditional replica placement mechanisms are incapable of taking different representations of the same object into consideration, so A-CDNs that use them may only replicate generic or statically adapted representations. Replicating statically adapted representations reduces the sharing of the replicas, while replicating generic representations incurs adaptation costs and delays with every request. The dissertation therefore proposes adaptation-aware replica placement mechanisms. By taking the adaptability of the contents into account, adaptation-aware replica placement mechanisms may replicate generic, statically adapted, and even partially adapted representations of an object, and are thus able to balance between static and dynamic content adaptation. The dissertation evaluates the performance advantages of taking knowledge about the adaptability of contents into consideration when calculating a placement of replicas in an A-CDN. To this end, the problem of adaptation-aware replica placement is formalized as an optimization problem; algorithms for solving the optimization problem are proposed and implemented in a simulator. The underlying simulation model describes an Internet-wide distributed A-CDN that delivers JPEG images to heterogeneous mobile and stationary clients. Based on this model, the performance of the adaptation-aware replica placement mechanisms is evaluated and compared to that of traditional replica placement mechanisms. The simulations show that, depending on the system and load model as well as the storage capacity of the surrogates, the adaptation-aware approach is superior to traditional replica placement mechanisms in many cases. However, if the loads of different client types hardly overlap, or if the surrogates have sufficient storage capacity, the adaptation-aware approach has no significant advantage over traditional replica placement mechanisms.
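As an illustration of the kind of decision such a mechanism has to make, the sketch below greedily fills a single surrogate's storage with (object, representation) pairs ranked by expected benefit per byte. The benefit model, sizes, and numbers are invented for the example and are not the dissertation's algorithm.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    obj: str             # content object identifier
    representation: str  # e.g. "generic", "qcif", "thumbnail"
    size: int            # bytes needed in the surrogate
    benefit: float       # expected saved transfer/adaptation cost if replicated here

def place_replicas(candidates, capacity):
    """Greedy adaptation-aware placement for one surrogate: pick the
    (object, representation) pairs with the best benefit per byte until
    the surrogate's storage capacity is exhausted."""
    chosen, used = [], 0
    for c in sorted(candidates, key=lambda c: c.benefit / c.size, reverse=True):
        if used + c.size <= capacity:
            chosen.append(c)
            used += c.size
    return chosen

# Hypothetical example: one image, three representations competing for 300 KB.
candidates = [
    Candidate("img42", "generic",   200_000, 9.0),  # reusable, but adapted per request
    Candidate("img42", "qcif",       40_000, 4.5),  # pre-adapted for small mobile clients
    Candidate("img42", "thumbnail",  10_000, 1.5),  # pre-adapted preview
]
for c in place_replicas(candidates, capacity=300_000):
    print(c.obj, c.representation)
```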
23

Modeling performance of internet-based services using causal reasoning

Tariq, Muhammad Mukarram Bin 06 April 2010 (has links)
The performance of Internet-based services depends on many server-side, client-side, and network-related factors. Often, the interactions among these factors, or their effects on service performance, are not known or well understood. The complexity of these services makes it difficult to develop analytical models, and the lack of models impedes network management tasks such as predicting performance while planning changes to the service infrastructure, or diagnosing causes of poor performance. We posit that statistical causal methods can be used to model the performance of Internet-based services and facilitate performance-related network management tasks. Internet-based services are well suited to statistical learning because the inherent variability in the many factors that affect performance allows us to collect comprehensive datasets covering service performance under a wide variety of conditions. From such datasets we can learn conditional distributions that represent the functions governing service performance and the dependencies inherent in the service infrastructure. These functions and dependencies are accurate and can be used in lieu of analytical models to reason about system performance, such as predicting the performance of a service when changing some factors, finding causes of poor performance, or isolating the contributions of individual factors to observed performance. We present three systems, What-if Scenario Evaluator (WISE), How to Improve Performance (HIP), and Network Access Neutrality Observatory (NANO), that use statistical causal methods to facilitate network management tasks. WISE predicts performance for what-if configuration and deployment questions for content distribution networks: it learns the causal dependency structure among the latency-causing factors and, when one or more factors are changed, estimates the effect on the other factors using that structure. HIP extends WISE and uses the causal dependency structure to invert the performance function, find causes of poor performance, and help answer questions about how to improve performance or achieve performance goals. NANO uses causal inference to quantify the impact of ISPs' discrimination policies on service performance; it is the only tool to date for detecting destination-based discrimination techniques that ISPs may use. We have evaluated these tools by applying them to large-scale Internet-based services and by experiments on the wide-area Internet. WISE is actively used at Google for predicting network-level and browser-level response time for Web search for new datacenter deployments. We have used HIP to find causes of high-latency Web search transactions at Google and identified many cases where high-latency transactions can be significantly mitigated with simple infrastructure changes. We have evaluated NANO using experiments on the wide-area Internet and have also made the tool publicly available, in order to recruit users and deploy NANO at a global scale.
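A toy illustration of the what-if idea described above: given a learned dependency order, a changed factor is fixed and its descendants are re-drawn from conditional distributions estimated on historical data. The variables, dependency structure, data, and the nearest-neighbour resampling scheme below are simplified assumptions for illustration, not WISE's actual model.

```python
import random

# Historical observations of latency-causing factors (hypothetical columns):
# rtt_ms and server_ms are upstream factors; response_ms depends on both.
DATASET = [
    {"rtt_ms": 20, "server_ms": 120, "response_ms": 150},
    {"rtt_ms": 25, "server_ms": 110, "response_ms": 145},
    {"rtt_ms": 80, "server_ms": 115, "response_ms": 210},
    {"rtt_ms": 85, "server_ms": 130, "response_ms": 230},
    # ... in practice, many rows collected from live traffic
]

# Causal ordering assumed to have been learned from data: parents precede children.
PARENTS = {"rtt_ms": [], "server_ms": [], "response_ms": ["rtt_ms", "server_ms"]}

def conditional_sample(target, fixed, dataset, k=2):
    """Draw `target` from the rows whose parent values are closest to `fixed`
    (a crude stand-in for an estimated conditional distribution)."""
    parents = PARENTS[target]
    if not parents:
        return random.choice(dataset)[target]
    scored = sorted(dataset, key=lambda row: sum((row[p] - fixed[p]) ** 2 for p in parents))
    return random.choice(scored[:k])[target]

def what_if(intervention, dataset, n=1000):
    """Estimate mean response time under an intervention, e.g. {'rtt_ms': 20}."""
    total = 0.0
    for _ in range(n):
        sample = dict(intervention)
        for var in ("rtt_ms", "server_ms", "response_ms"):  # causal order
            if var not in sample:
                sample[var] = conditional_sample(var, sample, dataset)
        total += sample["response_ms"]
    return total / n

# What if a new datacenter cut RTT to ~20 ms for these users?
print(what_if({"rtt_ms": 20}, DATASET))
```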
24

Social network support for data delivery infrastructures

Sastry, Nishanth Ramakrishna January 2011 (has links)
Network infrastructures often need to stage content so that it is accessible to consumers. The standard solution, deploying the content on a centralised server, can be inadequate in several situations. Our thesis is that information encoded in social networks can be used to tailor content staging decisions to the user base and thereby build better data delivery infrastructures. This claim is supported by two case studies, which apply social information in challenging situations where traditional content staging is infeasible. Our approach examines empirical traces to identify relevant social properties and then exploits them. The first study looks at cost-effectively serving the "Long Tail" of rich-media user-generated content, which needs to be staged close to viewers to control latency and jitter. Our traces show that a preference for the unpopular tail items often spreads virally and is localised to some part of the social network. Exploiting this, we propose Buzztraq, which decreases replication costs by selectively copying items to locations favoured by viral spread. We also design SpinThrift, which separates popular and unpopular content based on the relative proportion of viral accesses, and opportunistically spins down disks containing unpopular content, thereby saving energy. The second study examines whether human face-to-face contacts can efficiently create paths over time between arbitrary users. Here, content is staged by spreading it through intermediate users until the destination is reached. Flooding every node minimises delivery times but is not scalable. We show that the human contact network is resilient to individual path failures and that, for unicast delivery, randomly sampling a handful of the paths it finds efficiently approximates flooding's delivery time distribution. Multicast by contained flooding within a community is also efficient. However, connectivity relies on rare contacts, and frequent contacts are often not useful for data delivery. Also, periods of similar duration can achieve different levels of connectivity; we devise a test to identify good periods. We finish by discussing how these properties influence routing algorithms.
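A rough sketch of the contrast drawn in the second case study, on an invented contact trace: epidemic flooding gives the earliest possible delivery time, while a few randomly sampled relay paths give an approximation. The trace format and the single-copy random relaying used for sampling here are simplifications for illustration, not the dissertation's method (which samples among paths found by flooding).

```python
import random

# A contact trace: (time, person_a, person_b) meetings, assumed sorted by time.
CONTACTS = [
    (1, "src", "b"), (2, "b", "c"), (3, "src", "d"),
    (4, "c", "dst"), (6, "d", "dst"), (9, "src", "dst"),
]

def flood_delivery_time(contacts, src, dst):
    """Epidemic forwarding: every contact copies the message; return earliest arrival."""
    have = {src}
    for t, a, b in contacts:
        if a in have or b in have:
            have |= {a, b}
            if dst in have:
                return t
    return None

def sampled_delivery_time(contacts, src, dst, samples=5, p_forward=0.5):
    """Single-copy relaying along `samples` random paths; return best arrival time."""
    best = None
    for _ in range(samples):
        holder = src
        for t, a, b in contacts:
            if holder in (a, b) and random.random() < p_forward:
                holder = b if holder == a else a
                if holder == dst:
                    best = t if best is None else min(best, t)
                    break
    return best

print("flooding:", flood_delivery_time(CONTACTS, "src", "dst"))
print("sampled :", sampled_delivery_time(CONTACTS, "src", "dst"))
```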
25

Implementace CDN a clusteringu v prostředí GNU/Linux s testy výkonnosti. / CDN and clustering in GNU/Linux with performance testing

Mikulka, Pavel January 2008 (has links)
Fault tolerance is essential in a production-grade service delivery network. One solution is to build a clustered environment that keeps system failures to a minimum. This thesis examines high-availability and load-balancing services built with open-source tools on GNU/Linux. It discusses general technologies of high-availability computing such as virtualization, synchronization, and mirroring. DRBD, a tool for building synchronized Linux block devices, is well suited to building relatively cheap high-availability clusters. The thesis also examines the Linux-HA project, the Red Hat Cluster Suite, LVS, and related tools. Content Delivery Networks (CDNs) replicate content over several mirrored web servers strategically placed at various locations in order to deal with flash crowds. A CDN combines a request-routing mechanism with a replication mechanism, and thus offers fast and reliable applications and services by distributing content to cache servers located close to end users. This work examines the open-source CDNs Globule and CoralCDN and tests their performance in a global deployment.
26

A Large Scale Assessment of DNS Resolution Services

Kernan, Nicholas 26 May 2023 (has links)
No description available.
27

La gestion du trafic dans les réseaux orientés contenus / Traffic management in content-centric networks

Benkirane, Nada 07 March 2014 (has links) (PDF)
Content-Centric Networks (CCN) were designed to optimize network resources and provide stronger security. The design and implementation of this architecture are still in their early stages; this thesis presents proposals for traffic management in these future networks. Control mechanisms must be added for sharing bandwidth between flows: traffic control is necessary to guarantee low latency for video and audio streaming flows and to share bandwidth fairly between elastic flows. We propose an Interest Discard mechanism for CCN networks to optimize bandwidth usage. Since CCN favours downloading a content item from several sources, we study the performance of multipath/multisource delivery and observe that it depends strongly on cache performance. In the second part of this thesis, we evaluate cache performance using a simple and accurate approximation for LRU caches. Cache performance depends heavily on object popularity and catalogue size, so we evaluate it using popularity distributions and catalogues representative of the data actually exchanged on the Internet. We observe that cache sizes must be very large to achieve a significant reduction in bandwidth, which could be a constraint for implementing caches in routers. We argue that the placement of caches should follow a bandwidth/memory trade-off, with the adopted distribution minimizing overall cost; to this end, we evaluate the cost differences between architectures.
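A simple and accurate LRU approximation of the kind referred to here is commonly stated as follows (this is the well-known characteristic-time, or "Che", approximation in its generic form; whether it matches the thesis's exact variant is an assumption, and the workload numbers below are invented): object i with request rate λ_i is hit with probability h_i ≈ 1 − exp(−λ_i t_C), where the characteristic time t_C solves Σ_i (1 − exp(−λ_i t_C)) = C for a cache of C objects.

```python
import math

def characteristic_time(rates, cache_size, tol=1e-9):
    """Solve sum_i (1 - exp(-rate_i * t)) = cache_size for t by bisection.
    Assumes cache_size is smaller than the catalogue."""
    if cache_size >= len(rates):
        raise ValueError("cache must be smaller than the catalogue")

    def occupancy(t):
        return sum(1.0 - math.exp(-r * t) for r in rates)

    lo, hi = 0.0, 1.0
    while occupancy(hi) < cache_size:   # grow the bracket until it covers the root
        hi *= 2.0
    while hi - lo > tol * hi:
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if occupancy(mid) < cache_size else (lo, mid)
    return (lo + hi) / 2.0

def lru_hit_rates(rates, cache_size):
    """Per-object hit probabilities under the characteristic-time approximation."""
    t_c = characteristic_time(rates, cache_size)
    return [1.0 - math.exp(-r * t_c) for r in rates]

# Illustrative workload: Zipf(0.8) popularity over a 100,000-object catalogue,
# with a cache holding 1% of the catalogue (all numbers made up for the example).
N, alpha, cache = 100_000, 0.8, 1_000
weights = [1.0 / (i + 1) ** alpha for i in range(N)]
total = sum(weights)
rates = [w / total for w in weights]   # normalized request rates

hits = lru_hit_rates(rates, cache)
overall = sum(r * h for r, h in zip(rates, hits))
print(f"overall hit probability ≈ {overall:.3f}")
```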
28

Disponibilidade de conteúdo em sistemas CDN assistidos por redes P2P / Content availability in CDN systems assisted by P2P networks

Oliveira, Jhonathan Araújo 24 September 2013 (has links)
Scalability and high demand for resources are the main challenges that content providers face in network-based multimedia applications. On YouTube, for example, one of the most popular video-on-demand delivery systems, users upload 100 hours of video to its servers every minute, and more than four billion hours of video are watched every month. CDN-P2P systems are widely recognized as a scalable alternative for multimedia content delivery on the Internet. In these systems, the peers of a peer-to-peer (P2P) network share their resources, reducing the demands on the content delivery network (CDN) infrastructure, while the CDN servers guarantee content availability when peer contributions are limited by churn or when the content is new to the peers of the P2P network. However, CDN-P2P systems alone do not guarantee effective service: the departure of peers that are the sole holders of a given content item can cause congestion around the CDN server and degrade the users' quality of experience. This dissertation investigates the contribution of stable peers to content availability in the P2P portion of a CDN-P2P system designed to distribute videos similar to those distributed by YouTube. To this end, real data were collected from the YouTube web site, exploiting the behaviour of users who access playlists to characterize the stability of peers in the system. The assumption that playlist viewers improve content availability rests on the longer time these users stay connected to the system and on the likely popularity of the content they share. It was found that when a large number of playlist-playing peers spend long sessions connected, the improvement in content availability was 60%; even in scenarios with low playlist-player participation, the improvement exceeded 20%. Finally, we evaluated how mesh-construction policies affect the distribution system when peers are grouped and identified as stable or ordinary. These policies structure the P2P portion of the system through criteria applied to the arrival, maintenance, and management of peer connections, thereby reducing the demands on the CDN server.
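A simplified sketch of the kind of stability-aware neighbour selection such mesh-construction policies imply. The stability threshold, quota, and scoring rule are illustrative assumptions, not the dissertation's actual policy.

```python
import random
from dataclasses import dataclass, field

STABLE_SESSION_S = 30 * 60  # hypothetical threshold: 30-minute sessions mark a peer as stable

@dataclass
class Peer:
    pid: str
    avg_session_s: float                  # observed mean session length
    neighbours: list = field(default_factory=list)

    @property
    def stable(self):
        return self.avg_session_s >= STABLE_SESSION_S

def pick_neighbours(joining, candidates, degree=4, stable_quota=0.5):
    """Fill a newcomer's neighbour list, reserving part of it for stable peers
    so that content held only by short-lived peers is less likely to vanish."""
    stable = [p for p in candidates if p.stable]
    ordinary = [p for p in candidates if not p.stable]
    n_stable = min(len(stable), int(degree * stable_quota))
    chosen = random.sample(stable, n_stable)
    chosen += random.sample(ordinary, min(len(ordinary), degree - n_stable))
    joining.neighbours = chosen
    return chosen

# Hypothetical swarm: two playlist viewers with long sessions, three casual viewers.
swarm = [Peer("p1", 3600), Peer("p2", 5400), Peer("p3", 300), Peer("p4", 240), Peer("p5", 90)]
newcomer = Peer("new", 0)
print([p.pid for p in pick_neighbours(newcomer, swarm)])
```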
29

Diffusion de flots vidéos dans des réseaux sous-provisionnés / Video stream delivery in under-provisioned networks

Liu, Jiayi 04 November 2013 (has links) (PDF)
The proliferation of new devices (such as smartphones and tablets) promotes new multimedia services (e.g. user-generated live video broadcasting) as well as new streaming techniques (e.g. rate-adaptive streaming). As a result, Internet traffic related to video streaming shows formidable, sustained growth. Yet network infrastructures struggle to cope with this growth, and it is now common for a delivery network to be insufficiently provisioned. This under-provisioning problem is more severe for live video because of its real-time requirements. In this thesis, we focus on bandwidth-efficient video delivery solutions for live streaming in under-provisioned video delivery networks. Specifically, we make two main contributions: (1) a user-generated live video sharing system based on peer-to-peer (P2P) techniques, and (2) a live rate-adaptive streaming system based on a Content Delivery Network (CDN). First, we built a multi-overlay P2P video sharing system that allows Internet users to broadcast their own live videos. Such a system consists of multiple P2P live video streaming overlays and faces the problem of finding a suitable allocation of peer upload bandwidth. We designed various bandwidth allocation algorithms for this problem and showed how optimal solutions can be computed efficiently. Second, we studied the problem of delivering live rate-adaptive streams in the CDN. We identified a discretized streaming model for multiple live videos in modern CDNs, formulated a general optimization problem as an Integer Linear Program (ILP), and showed that it is NP-complete. We then presented a fast, easy-to-implement, near-optimal algorithm with proven approximation ratios for a specific scenario. This work is a first step towards streaming multiple live rate-adaptive videos in CDNs and provides a theoretical basis for deeper investigation. Last, we extended the discretized streaming model into a user-centric one that maximizes the overall satisfaction of a user population, and presented a practical system that efficiently uses CDN infrastructure to deliver live video streams to viewers in dynamic, large-scale CDNs. The benefit of our approaches in reducing the required CDN infrastructure capacity is validated through a set of realistic trace-driven large-scale simulations. In all, this thesis explores bandwidth-efficient live video delivery solutions in under-provisioned delivery networks for multiple streaming technologies, with the aim of maximally utilizing the bandwidth of relay nodes (peers in P2P, forwarding equipment in the CDN) to achieve an optimization goal.
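To give a concrete flavour of the allocation problem described here, the sketch below greedily assigns viewers of (video, bitrate) representations to CDN edge servers under per-server bandwidth caps, preferring assignments that serve the most viewers per unit of bandwidth. This greedy formulation is a simplified stand-in for the thesis's ILP, and all names and numbers are invented.

```python
# Each edge server has an outgoing-bandwidth cap (Mbit/s); each demand says how many
# viewers behind a server want a given (video, bitrate) representation.
CAPACITY = {"edge-paris": 100, "edge-lyon": 60}
BITRATE = {("news", "480p"): 1.5, ("news", "1080p"): 5.0, ("match", "720p"): 3.0}
DEMAND = [  # (server, video, quality, viewers)
    ("edge-paris", "news", "1080p", 12),
    ("edge-paris", "match", "720p", 20),
    ("edge-lyon", "news", "480p", 30),
    ("edge-lyon", "match", "720p", 15),
]

def greedy_assign(capacity, bitrate, demand):
    """Serve demands in order of viewers per unit bandwidth, as far as capacity allows."""
    left = dict(capacity)
    served = []
    ranked = sorted(demand, key=lambda d: d[3] / bitrate[(d[1], d[2])], reverse=True)
    for server, video, quality, viewers in ranked:
        per_viewer = bitrate[(video, quality)]
        can_serve = min(viewers, int(left[server] // per_viewer))
        if can_serve > 0:
            served.append((server, video, quality, can_serve))
            left[server] -= can_serve * per_viewer
    return served, left

served, left = greedy_assign(CAPACITY, BITRATE, DEMAND)
print(served)  # who gets served where, and at what quality
print(left)    # leftover bandwidth per edge server
```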
30

Design and prototyping of temperature resilient clock distribution networks

Natu, Nitish Umesh 22 May 2014 (has links)
Clock Distribution Networks play a vital role in the performance and reliability of a system. However, temperature gradients observed in 3D ICs hamper the functionality of CDNs by introducing variations in skew and propagation delay. This thesis presents two compensation techniques, Adaptive Voltage and Controllable Delay, to overcome these problems. The compensation methods are validated using an FPGA-based test vehicle. Modifications to the traditional buffer design are also presented, and the performance as well as the area and power overhead of both implementations are compared.
