  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Future internet architecture to structure and to manage dynamic autonomous systems, internet service providers and customers

Oliveira, Luciana Pereira 31 January 2008 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico / Several works in the area of dynamic networks have been proposed in the literature with the goal of adding mobility support to the Internet architecture. The problem these unstable networks pose to the Internet is to offer a set of mechanisms, such as addressing, information management and information forwarding, that support mobile information and mobile entities (Autonomous Systems, Internet Service Providers and customers). In this context, some proposals for the Internet architecture separate the locator (currently the IP address) from the identifier name, because of the tight coupling between the two. In general, they adopt an overlay-layer routing approach to keep these two kinds of information apart. Other works argue that this decoupling alone does not solve the mobility problems, since dynamicity generates many control messages and updates of the binding between IP and name. For this reason, researchers have also proposed new models to manage the overlay layer. One contribution of this work is a proposal for the Internet architecture, named Stable Society, that adopts a role-based approach. A role is a functional unit used to organize communication. An important differential of the proposal is that, besides decoupling name and location, it also offers solutions to the problems of structuring and maintaining the overlay layer. The work defines four roles: the messenger forwards data within a society; the guard, the most stable entity, forwards messages between societies; the worker stores information; and the leader structures and manages the overlay network.
Narrowing the scope for the implementation in this master's thesis, the messenger and the guard were treated as the network layer, with no distinction of stability, since the goal of the work was to provide a management mechanism for the routing overlay. Therefore, as a proof of concept, the leaders and the workers were implemented, because they act independently of the access technology and are central to solving the instability problem in the information storage and discovery processes. As a result, a new algorithm named Stable Society model over Distributed Hash Table (SSDHT) was proposed and compared with another DHT solution (Chord). The results showed that SSDHT is a good algorithm, especially as instability grows (traffic load, degree of mobility and network size). For example, the message delivery success rate stayed above 90% as the traffic load, the degree of mobility and the network size were varied.
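The abstract does not reproduce the SSDHT algorithm, but its core intuition — biasing DHT routing toward stable nodes rather than toward pure ring progress, as Chord does — can be sketched as follows (the function names, the stability metric, and the weighting are illustrative assumptions, not taken from the thesis):

```python
import hashlib

def node_id(name, bits=16):
    """Hash a node name onto a small identifier ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** bits)

def clockwise_distance(a, b, bits=16):
    """Distance from a to b going clockwise around the ring."""
    return (b - a) % (2 ** bits)

def next_hop(key, neighbors, alpha=0.5, bits=16):
    """Pick the neighbor with the best trade-off between ring progress
    toward `key` and stability (an uptime fraction in [0, 1]).
    Pure Chord-style routing would use progress alone (alpha = 1.0);
    weighting in stability is the Stable Society intuition."""
    def score(n):
        progress = 1 - clockwise_distance(n, key, bits) / 2 ** bits
        return alpha * progress + (1 - alpha) * neighbors[n]
    return max(neighbors, key=score)
```

With a moderate alpha, a very stable node slightly farther from the key wins over an unstable node right next to it, which is exactly the behaviour the abstract credits for the high delivery rate under churn.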
22

Estudo comparativo de técnicas de restauração de caminhos em redes de serviços

Jerônimo, Klarissa de Souza January 2005 (has links)
A communication link between two points of a computer network, as in the Internet, can fail. To try to keep the communication alive, restoration techniques reroute traffic through other links, forming a new path. This restoration increases cost relative to the original path. In a service network, services are distributed among the nodes. To carry out a task, which is a sequence of services, the path linking the source node to the destination node is a sequence of links and nodes that must contain that sequence of services. The intermediate nodes themselves do not matter; only the sequence of services and the cost of the path do. A restoration technique specific to service networks proved necessary, because network restoration techniques do not consider following an alternative path, able to perform the task, that contains nodes other than those of the original path. The contribution of this dissertation is a comparative analysis of three restoration techniques: network, local and total restoration. Network restoration is the usual one, reconnecting the same nodes of the path through links that did not fail. The other two are based on a map of the services in the network and of the nodes where they reside. Local restoration rebuilds the path from the point just before the failure. Total restoration builds a completely new path for all the services, possibly reusing parts of the original path. Total restoration proved better than network restoration on two counts: its cost increase was 10%, against 20 to 50% depending on the number of nodes; and its cost increase was independent of the number of nodes, suggesting that it adapts better to large networks.
These comparisons also indicate that network restoration remains usable for short-lived applications or for applications whose main bottleneck is not the network.
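The key idea of total restoration — recomputing a cheapest path over the surviving links, constrained to visit the task's services in order — can be sketched as a Dijkstra search over (node, services-covered) states. The graph model, costs, and names below are illustrative assumptions, not the dissertation's implementation:

```python
import heapq

def restore_path(graph, services, src, dst, task):
    """Cheapest src->dst path whose visited nodes provide the task's
    services in order.  `graph` maps node -> {neighbor: link cost},
    `services` maps node -> set of offered services.  A search state
    (node, k) means the first k services of `task` are already covered."""
    def covered(node, k):
        # A node may satisfy one or more of the next pending services.
        while k < len(task) and task[k] in services[node]:
            k += 1
        return k
    heap, seen = [(0, src, covered(src, 0))], set()
    while heap:
        cost, node, k = heapq.heappop(heap)
        if (node, k) in seen:
            continue
        seen.add((node, k))
        if node == dst and k == len(task):
            return cost
        for nxt, c in graph[node].items():
            heapq.heappush(heap, (cost + c, nxt, covered(nxt, k)))
    return None  # no surviving path can perform the task
```

For example, if the original path A-B-D (with B providing service s1) loses node B, the search can still complete the task through a detour node C that also offers s1 — precisely the option that plain network restoration never considers.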
23

Analysis and Implementation of Topology-Aware Overlay Systems on the Internet

Ren, Shansi 22 October 2009 (has links)
No description available.
24

Sustainable and durable bridge decks

Shearrer, Andrew Joseph January 1900 (has links)
Master of Science / Department of Civil Engineering / Robert J. Peterman / Epoxy polymer overlays have been used for decades on existing bridge decks to protect the deck and extend its service life. The polymer overlay's ability to seal a bridge deck is now being specified for new construction. Questions exist about the amount of drying time needed to achieve an acceptable concrete moisture content and ensure an adequate bond to the polymer overlay. Current Kansas Department of Transportation (KDOT) specifications for new bridge decks call for a 14-day wet curing period followed by 21 days of drying (Kansas DOT, 2007). If not enough drying is provided, moisture within the concrete can build up water vapor pressure at the overlay interface and induce delamination. If too much drying time is provided, projects are delayed, which can increase the total project cost or even push overlay placement to the next spring. A testing procedure was developed to simulate a bridge deck in order to test the concrete moisture content and the bonding strength of the overlay. Concrete slabs were cast to reproduce typical concrete and curing conditions for a new bridge deck. Three concrete mixtures were tested to see what effect the water-cement ratio and the addition of fly ash might have on the overlay bond strength. Wet curing occurred at three different temperatures (40°F, 73°F, and 100°F) to see whether temperature also played a part in the bond strength. The concrete was then allowed to dry for 3, 7, 14, or 21 days. Five epoxy-polymer overlay systems preapproved by KDOT were each used with the previously mentioned concrete and curing conditions. Afterward, the slabs were set up for pull-off tests to measure the tensile rupture strength. The concrete slabs with the different epoxy overlays were heated to 122-125°F to replicate summer bridge deck temperatures.
Half of the pull-off tests were performed while the slabs were heated and half once the slabs had cooled back down to 73° ± 5°F. Results from the pull-off tests, together with moisture meter readings taken on the concrete prior to overlay placement, were compared and analyzed. Testing conditions were compared with one another to see which had the larger effect on the epoxy polymer overlay's bond strength.
25

Implementação de um gerenciador de redes overlay para o GridSim / Implementation of an overlay network manager for GridSim

Sabatine, Ricardo José 11 November 2010 (has links)
Grid computing has been established as an important computing paradigm, since it allows dealing with large amounts of computation and data and supports the collaboration of geographically distributed participants. Such systems must be organized in a completely distributed way, with each participant knowing about some other participants, and the information needed for the system to function circulating through the resulting overlay network.
When new algorithms, protocols or infrastructures for the grid are proposed, evaluating them properly implies considering their operation with a large number of participants, which invariably means that simulations must be run. This work presents an overlay network simulation subsystem integrated into the GridSim grid computing simulation platform, in order to facilitate the study of such structures and the development of new protocols and algorithms for their use in computational grids. The adopted methodology led to a Java package in GridSim with classes and interfaces that represent the basic concepts of overlay networks and of the clients' interface to those networks. Using this package, it was possible to develop protocols for structured and unstructured networks in the simulator and to simulate them in data grid scenarios. The results obtained show that the protocols implemented in the simulator behave in agreement with what is reported in the literature.
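The actual package is written in Java for GridSim; purely to illustrate the kind of abstraction described — a fixed client-facing interface (join, route) with protocol-specific neighbour selection — here is a hypothetical Python sketch, with all class and method names invented:

```python
from abc import ABC, abstractmethod

class OverlayProtocol(ABC):
    """Hypothetical base class: concrete protocols decide the neighbour
    sets, while routing over those sets is shared by all protocols."""

    def __init__(self):
        self.peers = {}  # node -> set of neighbour nodes

    @abstractmethod
    def join(self, node):
        """Add a node and rewire the neighbour sets."""

    def route(self, src, dst):
        """Breadth-first route over the current neighbour sets."""
        frontier, prev = [src], {src: None}
        while frontier:
            nxt = []
            for n in frontier:
                for m in self.peers.get(n, ()):
                    if m not in prev:
                        prev[m] = n
                        nxt.append(m)
            frontier = nxt
        if dst not in prev:
            return None
        path, n = [], dst
        while n is not None:
            path.append(n)
            n = prev[n]
        return path[::-1]

class RingOverlay(OverlayProtocol):
    """Toy 'structured' protocol: every node links to its ring successor."""

    def join(self, node):
        self.peers[node] = set()
        ring = sorted(self.peers)
        for i, n in enumerate(ring):
            succ = ring[(i + 1) % len(ring)]
            self.peers[n] = {succ} if succ != n else set()
```

An unstructured protocol would subclass the same base and only change `join` (e.g. wiring each newcomer to a few random peers), which is the separation of concerns the abstract attributes to the package.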
26

Dynamic Composition of Service Specific Overlay Networks

Al Ridhawi, Yousif 09 April 2013 (has links)
Content delivery through service overlay networks has gained popularity due to the overlays’ abilities to provide effective and reliable services. Inefficiencies of one-to-one matching of user requirements to a single service have given rise to service composition. Customized media delivery can be achieved through dynamic compositions of Service Specific Overlay Networks (SSONs). However, the presence of SSONs in dynamic environments raises the possibility of unexpected failures and quality degradations. Constructing, managing, and repairing corrupted service paths are therefore challenging problems. This thesis investigates the problem of autonomous SSON construction and management and identifies the drawbacks of current approaches. A novel multi-layered, autonomous, self-adaptive framework for constructing SSONs is presented. The framework includes a Hybrid Service Overlay Network layer (H-SON). The H-SON is a dynamic hybrid overlay dedicated to service composition for multimedia delivery in dynamic networks. Node placement in the overlay depends on the node’s stability and the types and quality of its provided services. Changes in the stability and QoS of service nodes are reflected by dynamic re-organizations of the overlay. The H-SON permits fast and efficient searches for component services that meet client functional and quality expectations. Self-managed overlay nodes coordinate their behaviors to formulate a service composition path that meets the client’s requirements. Two approaches are presented in this work. The first illustrates how SSONs are established through dynamically adaptable MS-designed plans. The imprecise nature of nonfunctional service characteristics, such as QoS, is modeled using a fuzzy logic system. Moreover, semantic similarity evaluations enable us to include, in compositions, those services whose operations match, semantically, the requirements of the composition plan.
Plan-based composition solutions restrict service discovery to defined abstract models. Our second composition approach introduces a semantic similarity and nearness SSON composition method. The objective is to free service nodes from the adherence to restrictive composition plans. The presented work illustrates a service composition solution that semantically advances service composition paths towards meeting users’ needs with each service hop while simultaneously guaranteeing user-acceptable QoS levels. Simulation results showcase the effectiveness of the presented work. Gathered results validate the success of our service composition methods while meeting user requirements.
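As a rough illustration of similarity-driven composition under a QoS constraint, the sketch below greedily extends a composition path with the admissible service most similar to the still-unmet goal. A plain Jaccard overlap stands in for the thesis's fuzzy and semantic measures, and every name and data structure here is an assumption for illustration only:

```python
def compose(candidates, goal_terms, min_qos):
    """Greedy similarity-driven composition.  `candidates` maps a
    service name to (description terms, QoS score); at each hop the
    admissible service (QoS >= min_qos) with the highest Jaccard
    similarity to the unmet goal terms is appended to the path."""
    remaining = set(goal_terms)
    path = []
    while remaining:
        def sim(svc):
            terms, _qos = candidates[svc]
            return len(terms & remaining) / len(terms | remaining)
        usable = [s for s in candidates
                  if s not in path
                  and candidates[s][1] >= min_qos
                  and candidates[s][0] & remaining]
        if not usable:
            return None  # the composition cannot advance further
        best = max(usable, key=sim)
        path.append(best)
        remaining -= candidates[best][0]
    return path
```

Each hop moves the path semantically closer to the user's need while the QoS threshold keeps every chosen service at a user-acceptable level, mirroring the two guarantees the abstract claims for the second approach.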
27

Throughput and Fairness Considerations in Overlay Networks for Content Distribution

Karbhari, Pradnya 26 August 2005 (has links)
The Internet has been designed as a best-effort network, which does not provide any additional services to applications using the network. Overlay networks, which form an application layer network on top of the underlying Internet, have emerged as popular means to provide specific services and greater control to applications. Overlay networks offer a wide range of services, including content distribution, multicast and multimedia streaming. In my thesis, I focus on overlay networks for content distribution, used by applications such as bulk data transfer, file sharing and web retrieval. I first investigate the construction of such overlay networks by studying the bootstrapping functionality in an example network (the Gnutella peer-to-peer system). This study comprises the analysis and performance measurements of Gnutella servents and measurement of the GWebCache system that helps new peers find existing peers on the Gnutella network. Next, I look at fairness issues due to the retrieval of data at a client in the form of multipoint-to-point sessions, formed due to the use of content distribution networks. A multipoint-to-point session comprises multiple connections from multiple servers to a single client over multiple paths, initiated to retrieve a single application-level object. I investigate fairness of rate allocation from a session point of view, and propose fairness definitions and algorithms to achieve these definitions. Finally, I consider the problem of designing an overlay network for content distribution, which is fair to competing overlay networks, while maximizing the total end-to-end throughput of the data it carries. As a first step, I investigate this design problem for a single path in an Overlay-TCP network. I propose two schemes that dynamically provision the number of TCP connections on each hop of an Overlay-TCP path to maximize the end-to-end throughput using few extraneous connections. 
Next, I design an Overlay-TCP network, with the secondary goal of intra-overlay network fairness. I propose four schemes for deciding the number of TCP connections to be used on each overlay hop. I show that one can vary the proportion of sharing between competing overlay networks by varying the maximum number of connections allowed on overlay hops in each competing network.
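The provisioning idea — the end-to-end rate of an Overlay-TCP path is capped by its slowest hop, so extra TCP connections are best spent on the bottleneck — can be sketched with a simple linear hop model. The model and names are illustrative assumptions, not the schemes proposed in the thesis:

```python
def provision(per_conn, caps, budget):
    """Place `budget` TCP connections across the hops of an overlay path.
    per_conn[i] is the rate one connection achieves on hop i and caps[i]
    the hop capacity; hop rate is modelled as min(n * per_conn, cap) and
    the end-to-end rate as the minimum hop rate, so each additional
    connection goes to the current bottleneck hop."""
    conns = [1] * len(caps)  # start with one connection per hop
    def rate(i):
        return min(conns[i] * per_conn[i], caps[i])
    for _ in range(budget - len(caps)):
        i = min(range(len(caps)), key=rate)  # current bottleneck hop
        if rate(i) >= caps[i]:
            break  # capacity-limited: extra connections cannot help
        conns[i] += 1
    return conns, min(rate(j) for j in range(len(caps)))
```

The early exit captures the "few extraneous connections" goal: once the bottleneck hop is capacity-limited, adding connections anywhere only takes bandwidth from competing overlays without raising end-to-end throughput.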
28

Facilitating the provision of auxiliary support services for overlay networks

Demirci, Mehmet 20 September 2013 (has links)
Network virtualization and overlay networks have emerged as powerful tools for improving the flexibility of the Internet. Overlays are used to provide a wide range of useful services in today's networking environment, and they are also viewed as important building blocks for an agile and evolvable future Internet. Regardless of the specific service it provides, an overlay needs assistance in several areas in order to perform properly throughout its existence. This dissertation focuses on the mechanisms underlying the provision of auxiliary support services that perform control and management functions for overlays, such as overlay assignment, resource allocation, overlay monitoring and diagnosis. The priorities and objectives in the design of such mechanisms depend on network conditions and the virtualization environment. We identify opportunities for improvements that can help provide auxiliary services more effectively at different overlay life stages and under varying assumptions. The contributions of this dissertation are the following: 1. An overlay assignment algorithm designed to improve an overlay's diagnosability, which is defined as its property to allow accurate and low-cost fault diagnosis. The main idea is to increase meaningful sharing between overlay links in a controlled manner in order to help localize faults correctly with less effort. 2. A novel definition of bandwidth allocation fairness in the presence of multiple resource-sharing overlays, and a routing optimization technique to improve fairness and the satisfaction of overlays. Evaluation analyzes the characteristics of different fair allocation algorithms, and suggests that eliminating bottlenecks via custom routing can be an effective way to improve fairness. 3. An optimization solution to minimize the total cost of monitoring an overlay by determining the optimal mix of overlay and native links to monitor, and an analysis of the effect of topological properties on monitoring cost and on the composition of the optimal mix of monitored links. We call our approach multi-layer monitoring and show that it is a flexible approach producing minimal-cost solutions with low errors. 4. A study of virtual network embedding in software-defined networks (SDNs), identifying the challenges and opportunities for embedding in the SDN environment, and presenting two VN embedding techniques and their evaluation. One objective is to balance the stress on substrate components, and the other is to minimize the delays between VN controllers and switches. Each technique optimizes embedding for one objective while keeping the other within bounds.
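The multi-layer monitoring trade-off of contribution 3 can be made concrete with a toy brute-force search: each overlay link is observed either by a direct end-to-end probe or by monitoring all the native links it traverses, and native measurements are shared across overlay links. The cost model and names below are illustrative assumptions, not the dissertation's formulation:

```python
from itertools import combinations

def min_monitoring_cost(overlay, native_cost, overlay_cost):
    """Minimum total cost of observing every overlay link.  `overlay`
    maps an overlay link to the set of native links it traverses; a link
    is covered if all its native links are monitored, otherwise it pays
    its own direct-probe cost.  Exhaustive, so only for tiny topologies."""
    native = set().union(*overlay.values())
    best = None
    for r in range(len(native) + 1):
        for chosen in combinations(sorted(native), r):
            cost = sum(native_cost[n] for n in chosen)
            for link, path in overlay.items():
                if not path <= set(chosen):
                    cost += overlay_cost[link]  # fall back to direct probe
            if best is None or cost < best:
                best = cost
    return best
```

When overlay links share many native links, paying for native monitoring once and reusing it is cheapest; when native links are expensive or barely shared, direct probes win — the kind of topology dependence the contribution analyzes.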
30

On P2P Networks and P2P-Based Content Discovery on the Internet

Memon, Ghulam 17 June 2014 (has links)
The Internet has evolved into a medium centered around content: people watch videos on YouTube, share their pictures via Flickr, and use Facebook to keep in touch with their friends. Yet the only globally deployed service to discover content - i.e., the Domain Name System (DNS) - does not discover content at all; it merely translates domain names into locations. The lack of persistent naming, in particular, makes discovering content, rather than domains, challenging. Content Distribution Networks (CDNs), which augment DNS with location-awareness, suffer from the same lack of persistent content names. Recently, several infrastructure-level solutions to this problem have emerged, but their fundamental limitation is that they fail to preserve the autonomy of network participants. Specifically, the storage requirements for resolution within each participant may not be proportional to their capacity. Furthermore, these solutions cannot be incrementally deployed. To the best of our knowledge, content discovery services based on peer-to-peer (P2P) networks are the only ones that support persistent content names. These services also come with the built-in advantages of scalability and deployability. However, P2P networks have been deployed in the real world only recently, and their real-world characteristics are not well understood. It is important to understand these characteristics in order to improve performance and to propose new designs by identifying the weaknesses of existing ones. In this dissertation, we first propose a novel, lightweight technique for capturing P2P traffic. Using our captured data, we characterize several aspects of P2P networks and draw conclusions about their weaknesses. Next, we create a botnet to demonstrate the lethality of the weaknesses of P2P networks.
Finally, we address the weaknesses of P2P systems to design a P2P-based content discovery service, which resolves the drawbacks of existing content discovery systems and can operate at Internet-scale. This dissertation includes both previously published/unpublished and co-authored material.
