421

An adaptive approach for optimized opportunistic routing over Delay Tolerant Mobile Ad hoc Networks

Zhao, Xiaogeng January 2008
This thesis presents a framework for investigating opportunistic routing in Delay Tolerant Mobile Ad hoc Networks (DTMANETs), and introduces the concept of an Opportunistic Confidence Index (OCI). The OCI enables multiple opportunistic routing protocols to be applied as an adaptive group to improve DTMANET routing reliability, performance, and efficiency. The DTMANET is a recently acknowledged network architecture, which is designed to address the challenging and marginal environments created by adaptive, mobile, and unreliable network node presence. Because of its ad hoc and autonomic nature, routing in a DTMANET is a very challenging problem. The design of routing protocols in such environments, which ensure a high percentage delivery rate (reliability), achieve a reasonable delivery time (performance), and at the same time maintain an acceptable communication overhead (efficiency), is of fundamental consequence to the usefulness of DTMANETs. In recent years, a number of investigations into DTMANET routing have been conducted, resulting in the emergence of a class of routing known as opportunistic routing protocols. Current research into opportunistic routing has exposed opportunities for positive impacts on DTMANET routing. To date, most investigations have concentrated upon one or another of the quality metrics of reliability, performance, or efficiency, while some approaches have pursued a balance of these metrics through assumptions of a high level of global knowledge and/or uniform mobile device behaviours. No prior research that we are aware of has studied the connection between multiple opportunistic elements and their influences upon one another, and none has demonstrated the possibility of modelling and using multiple different opportunistic elements as an adaptive group to aid the routing process in a DTMANET. This thesis investigates OCI opportunities and their viability through the design of an extensible simulation environment, which makes use of methods and techniques such as abstract modelling, opportunistic element simplification and isolation, random attribute generation and assignment, localized knowledge sharing, automated scenario generation, intelligent weight assignment and/or opportunistic element permutation. These methods and techniques are incorporated at both the data acquisition and analysis phases. Our results show a significant improvement in all three metric categories. In one of the most applicable scenarios tested, OCI yielded a 31.05% message delivery increase (reliability improvement), a 22.18% message delivery time reduction (performance improvement), and a 73.64% routing depth reduction (efficiency improvement). We are able to conclude that the OCI approach is feasible across a range of scenarios, and that the use of multiple opportunistic elements to aid decision-making processes in DTMANET environments has value.
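The abstract does not give the thesis's scoring formula, so the sketch below only illustrates the general idea of folding several opportunistic elements into a single confidence index; the element names, weights, and normalization are hypothetical.

```python
# Illustrative sketch only: element names and weights are invented, not taken from the thesis.
def opportunistic_confidence_index(elements, weights):
    """Combine normalized per-element scores (0..1) into a single OCI value."""
    total = sum(weights.values())
    return sum(weights[name] * elements[name] for name in weights) / total

# Example: a node scores a candidate next hop on three opportunistic elements,
# then forwards the message to the neighbour with the highest OCI.
candidate = {"encounter_frequency": 0.8, "link_stability": 0.55, "buffer_headroom": 0.9}
weights = {"encounter_frequency": 0.5, "link_stability": 0.3, "buffer_headroom": 0.2}
print(opportunistic_confidence_index(candidate, weights))
```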
422

Deploying DNSSEC in islands of security

Murisa, Wesley Vengayi 31 March 2013
The Domain Name System (DNS), a name resolution protocol, is one of the vulnerable network protocols that has been subjected to many security attacks, such as cache poisoning, denial of service and the 'Kaminsky' spoofing attack. Security was not incorporated into the original design of DNS. The DNS Security Extensions (DNSSEC) add security to the name resolution process by using public key cryptosystems. Although DNSSEC is backward compatible with unsecured zones, it only offers security to clients when they communicate with security-aware zones. Widespread deployment of DNSSEC is therefore necessary to secure the name resolution process and provide security to the Internet. Only a few Top Level Domains (TLDs) have deployed DNSSEC, which inherently makes it difficult for their sub-domains to implement the security extensions to the DNS. This study analyses mechanisms that domains in islands of security can use to deploy DNSSEC so that the name resolution process can be secured in two specific cases: where the TLD is not signed, or where the domain registrar is not able to support signed domains. The DNS client-side mechanisms evaluated in this study include web browser plug-ins, local validating resolvers and domain look-aside validation. The results of the study show that web browser plug-ins cannot work on their own without local validating resolvers. The web browser validators, however, proved to be useful in indicating to the user whether a domain has been validated or not. Local resolvers present a more secure option for Internet users who cannot trust the communication channel between their stub resolvers and remote name servers. However, they do not provide a way of showing the user whether a domain name has been correctly validated or not. Based on the results of the tests conducted, it is recommended that local validators be used together with browser validators for visibility and improved security. On the DNS server side, Domain Look-aside Validation (DLV) presents a viable alternative for organizations in islands of security, such as most countries in Africa, where only two country code Top Level Domains (ccTLDs) have deployed DNSSEC. This research recommends the use of DLV by corporations to provide DNS security to both internal and external users accessing their web-based services.
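As a small illustration of the local-validating-resolver approach described above, the sketch below queries through a validator and checks the AD (Authenticated Data) flag. It assumes a DNSSEC-validating resolver (e.g. Unbound or BIND) is listening on 127.0.0.1 and that the dnspython package is installed; the domain name is only an example.

```python
# Minimal sketch, assuming a local validating resolver at 127.0.0.1 and dnspython installed.
import dns.flags
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["127.0.0.1"]        # stub resolver pointed at the local validator
resolver.use_edns(0, dns.flags.DO, 4096)    # request DNSSEC records (DO bit)

answer = resolver.resolve("example.com", "A")
validated = bool(answer.response.flags & dns.flags.AD)  # AD bit is set only for validated answers
print("validated" if validated else "not validated (insecure or unsigned zone)")
```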
423

Investigating call control using MGCP in conjunction with SIP and H.323

Jacobs, Ashley 14 March 2005
Telephony used to mean using a telephone to call another telephone on the Public Switched Telephone Network (PSTN), while data networks were used purely to allow computers to communicate. However, with the advent of the Internet, telephony services have been extended to run on data networks. Telephone calls within the IP network are known as Voice over IP (VoIP). These calls are carried by a number of protocols, the most popular currently being the Session Initiation Protocol (SIP) and H.323. Calls can be made from the IP network to the PSTN and vice versa through the use of a gateway. The gateway translates the packets from the IP network to circuits on the PSTN, and vice versa, to facilitate calls between the two networks. Gateways have evolved and are now split into two entities using a master/slave architecture. The master is an intelligent Media Gateway Controller (MGC) that handles the call control and signalling. The slave is a "dumb" Media Gateway (MG) that handles the translation of the media. The gateway control protocols currently in use are Megaco/H.248, MGCP and Skinny. These protocols have proved themselves on the edge of the network. Furthermore, since they communicate with the VoIP call signalling protocols as well as the PSTN, they have to be the lingua franca between the two networks. Within the VoIP network, the number of call signalling protocols makes it difficult for endpoints to communicate with each other and for services to be created. This research investigates the use of gateway control protocols as the lowest common denominator between the call signalling protocols SIP and H.323. More specifically, it uses MGCP to investigate service creation. It also considers the use of MGCP as a protocol translator between SIP and H.323. A service was created using MGCP to allow H.323 endpoints to send Short Message Service (SMS) messages. This service was then extended with minimal effort to SIP endpoints, and was used to investigate MGCP's ability to handle call control from H.323 and SIP endpoints. An MGC was then successfully used to act as a protocol translator between SIP and H.323.
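As background for the protocol-translation idea, the toy sketch below shows one way an MGC-style controller could map a SIP call request onto an H.323 setup through a protocol-neutral internal representation. The message dictionaries and field names are invented for illustration; this is not the implementation described in the thesis.

```python
# Toy illustration of protocol translation via a common internal call model; field names are assumed.
from dataclasses import dataclass

@dataclass
class CallRequest:            # protocol-neutral representation held by the controller
    caller: str
    callee: str
    codec: str

def from_sip_invite(invite: dict) -> CallRequest:
    # Reduce a (simplified) SIP INVITE to the neutral model.
    return CallRequest(invite["From"], invite["To"], invite["SDP-codec"])

def to_h323_setup(call: CallRequest) -> dict:
    # Re-express the neutral model as a (simplified) H.323 call setup.
    return {"sourceAddress": call.caller, "destinationAddress": call.callee,
            "capability": call.codec}

invite = {"From": "sip:alice@a.example", "To": "sip:bob@b.example", "SDP-codec": "PCMU"}
print(to_h323_setup(from_sip_invite(invite)))
```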
424

A proxy approach to protocol interoperability within digital audio networks

Igumbor, Osedum Peter January 2010
Digital audio networks are becoming the preferred solution for the interconnection of professional audio devices. Prominent amongst their advantages are reduced noise interference, signal multiplexing, and a reduction in the number of cables connecting networked devices. In the context of professional audio, digital networks have been used to connect devices including mixers, effects units, preamplifiers, breakout boxes, computers, monitoring controllers, and synthesizers. Such networks are governed by protocols that define the connection management procedures and device synchronization processes of devices that conform to them. A wide range of digital audio network control protocols exist, each defining specific hardware requirements of devices that conform to them. Device parameter control is achieved by sending a protocol message that indicates the target parameter and the action that should be performed on the parameter. Typically, a device conforms to only one protocol. By implication, only devices that conform to a specific protocol can communicate with each other, and only a controller that conforms to that protocol can control such devices. This results in the isolation of devices that conform to disparate protocols, since devices of different protocols cannot communicate with each other. This is currently a challenge in the professional music industry, particularly where digital networks are used for audio device control. This investigation seeks to resolve the issue of interoperability between professional audio devices that conform to different digital audio network protocols. This thesis proposes the use of a proxy that allows for the translation of protocol messages as a solution to the interoperability problem. The proxy abstracts devices of one protocol in terms of another, hence allowing all the networked devices to appear as conforming to the same protocol. The proxy receives messages on behalf of the abstracted device, and then fulfils them in accordance with the protocol that the abstracted device conforms to. Any number of protocol devices can be abstracted within such a proxy. This has the added advantage of allowing a common controller to control devices that conform to different protocols.
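To make the proxy idea concrete, here is a minimal sketch in which a controller speaking one protocol addresses a device of another protocol through a translating proxy. The device classes, parameter names and address map are hypothetical stand-ins for whatever the real audio-control protocols define.

```python
# Hypothetical sketch of a translating proxy; protocol names and parameter addresses are invented.
class ProtocolBDevice:
    def set_parameter(self, address: str, value: float) -> None:
        print(f"protocol-B command -> {address} = {value}")

class ProxyDevice:
    """Presents a protocol-B device to a controller that only speaks protocol A."""
    def __init__(self, target: ProtocolBDevice, address_map: dict):
        self.target = target
        self.address_map = address_map          # protocol-A parameter id -> protocol-B address

    def handle_protocol_a_message(self, param_id: str, value: float) -> None:
        # Translate the protocol-A request and fulfil it on the abstracted device.
        self.target.set_parameter(self.address_map[param_id], value)

proxy = ProxyDevice(ProtocolBDevice(), {"input1/gain": "/mixer/ch/1/gain"})
proxy.handle_protocol_a_message("input1/gain", -6.0)   # the controller never sees protocol B
```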
425

Análise de desempenho de redes de comunicação wireless em aplicações de Smart Grid / Performance analysis of wireless communication networks in Smart Grid applications

Ortega, Alcides [UNESP] 29 September 2015 (PDF)
In recent years it has become clear that the traditional system for generating, transmitting and distributing electric power has become insufficient and outdated to meet the full demand. This has driven strong growth in research into new technologies to modernize or rebuild the electric power grid in terms of infrastructure, availability, sustainability and reliability. This new concept is the smart grid, which proposes technology based on automation, communication, monitoring and control of the power grid, allowing control and optimization strategies to be deployed more efficiently than those currently in use. Smart grid technology encompasses many systems, among them communication topologies, circuits and electronic interfaces, which makes it difficult to build a real-world scenario for studying which communication system is best suited to smart grid applications. The equipment is very expensive and, depending on the extent and number of devices to be deployed, field trials become impossible. To address these problems, simulation software is used; such tools are convenient in these cases and are commonly employed in research to analyze the performance of a system. A communication network serving critical applications, such as electric power transmission and distribution, must be highly reliable, extremely secure and resilient to failures. To analyze the performance of a wireless communication network for smart grid applications with useful results, simulations must be carried out so that possible failures can be anticipated. Of the existing simulators, the most suitable was NS-2, an open-source discrete-event network simulator that facilitates the creation of communication network scenarios, taking into account the protocols involved in wired and wireless technologies. This software supports the ...
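The abstract does not include any simulation scripts, but NS-2 experiments of this kind are typically evaluated by post-processing the generated trace file. The sketch below computes packet delivery ratio and mean end-to-end delay assuming the classic ns-2 wireless trace format; the field positions and the trace file name are assumptions and may need adjusting for other trace formats.

```python
# Post-processing sketch (not from the thesis). Assumes the classic ns-2 wireless trace
# format, e.g. "s 10.0 _1_ AGT --- 4 cbr 512 [...]": event flag, timestamp, node, layer,
# separator, packet uid, packet type, size. Adjust the indices if your trace differs.
def delivery_stats(trace_path, packet_type="cbr", uid_field=5, type_field=6):
    sent, received = {}, {}
    with open(trace_path) as trace:
        for line in trace:
            fields = line.split()
            if len(fields) <= type_field or fields[type_field] != packet_type:
                continue
            event, timestamp, uid = fields[0], float(fields[1]), fields[uid_field]
            if event == "s" and fields[3] == "AGT" and uid not in sent:
                sent[uid] = timestamp              # application-layer send time
            elif event == "r" and fields[3] == "AGT":
                received[uid] = timestamp          # delivered to the destination application
    delivered = [uid for uid in received if uid in sent]
    pdr = len(delivered) / len(sent) if sent else 0.0
    delays = [received[uid] - sent[uid] for uid in delivered]
    return pdr, (sum(delays) / len(delays) if delays else 0.0)

print(delivery_stats("smartgrid.tr"))   # hypothetical trace file name
```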
426

Análise de desempenho de redes de comunicação wireless em aplicações de Smart Grid / Performance analysis of wireless communication networks in Smart Grid applications

Ortega, Alcides. January 2015
Advisor: Ailton Akira Shinoda / Co-advisor: Fabrizio Granelli / Examining committee: Christiana Marie Schweitzer, Luis Carlos Origa de Oliveira, Eduardo Coelho Marques da Costa, João Henrique Kleinschmidt / Abstract: identical to record 425 above / Doctorate
427

Arquiteturas de redes de armazenamento de dados / Storage networks architectures

Almeida, Ariovaldo Veiga de 26 July 2006
Advisor: Nelson Luis Saldanha da Fonseca / Dissertation (professional master's) - Universidade Estadual de Campinas, Instituto de Computação / Abstract: Storage networks offer computer systems consolidated, shared access to data storage devices, increasing the efficiency and availability of stored data. They allow data storage devices from different suppliers, even ones using different access protocols, to be made logically available for access. They also allow data management functions, such as backup and recovery, data replication, disaster recovery environments, and data migration, to be performed quickly and efficiently, with minimal overhead on the computer systems. In the 1980s, computer systems became decentralized, evolving from centralized environments, such as mainframe systems, to distributed platforms in which systems were separated into operational blocks, each block performing a specific function. It was not only the computer systems that evolved: storage systems also moved towards distributed architectures. The natural evolution of storage devices was to move from direct, dedicated connection to the computers towards a more flexible and shared approach, achieved by adopting the infrastructure of computer networks. This work analyzes the storage network technologies Storage Area Network (SAN) and Network Attached Storage (NAS), which are the main architectures that employ networking technologies for storing and sharing data. It shows the advantages of these architectures compared with the traditional form of directly connecting storage devices to computers, known as the Direct Attached Storage (DAS) architecture. / Master's / Computer Networks / Master in Computing
428

Avaliação da qualidade de chamadas VoIP cifradas usando Mean opinion score e Traffic control / Evaluation of quality in encrypted VoIP calls using Mean opinion score and Traffic control

Barison, Dherik 17 August 2018
Advisor: Leonardo de Souza Mendes / Dissertation (master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Abstract: The purpose of this work is to evaluate the quality of VoIP calls encrypted with different cipher algorithms through OpenVPN, in order to identify differences in results between the encryption algorithms and also between encrypted and non-encrypted calls. The evaluation uses the MOS (Mean Opinion Score), a method that indicates user satisfaction with communication quality. The encrypted VoIP calls take place in different network scenarios that present various problems, such as packet loss, out-of-order packets, delay, and limited network bandwidth. These scenarios were based on real usage situations and are emulated with the Linux Traffic Control tool, which can manipulate the packets sent through any available network interface. The scenarios also use different network bandwidths, to assess their influence in some situations. / Master's / Telecommunications and Telematics / Master in Electrical Engineering
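For context on the MOS metric used above: the ITU-T G.107 E-model provides a standard mapping from the transmission rating factor R to an estimated MOS, which is often used when scoring VoIP call quality computationally. This is background only, not necessarily the exact procedure followed in the dissertation.

```python
# ITU-T G.107 E-model mapping from rating factor R to estimated MOS (background illustration).
def r_to_mos(r: float) -> float:
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6

print(r_to_mos(93.2))   # ~4.4, roughly the best a narrowband call can score
print(r_to_mos(70.0))   # ~3.6, noticeable degradation (e.g. from loss or delay)
```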
429

Protocolos multicoordenados de acordo e o serviço de log / Multicoordinated agreement problems and the log service

Camargos, Lásaro Jonas 12 December 2008
Advisors: Edmundo R. M. Madeira, Fernando Pedone / Thesis (doctorate) - Universidade Estadual de Campinas, Instituto de Computação / Abstract: Agreement problems are a common abstraction in distributed systems. They appear when the components of the system must concur on reconfigurations, changes of state, or lines of action in general. Examples of agreement problems are Consensus, Atomic Commitment, and Atomic Broadcast. In this thesis we investigate these abstractions in the context of the environment in which they will run and the applications that they will serve; in general, we consider the asynchronous crash-recovery model. The goal is to devise protocols that exploit this contextual information to deliver improved availability. The correctness of our protocols holds even when the extra assumptions do not. In the first part of this thesis we explore the following property: messages broadcast in small networks tend to be delivered in order and reliably. We make three contributions in this part. The first contribution is to turn known Consensus algorithms that harness this ordering property to reach agreement in the crash-stop model into practical protocols, that is, protocols that efficiently tolerate message losses and recovery after crashes. Our protocols ensure progress even in the presence of failures, provided that spontaneous ordering holds frequently. In the absence of spontaneous ordering, some other assumption is required to cope with failures. The second contribution is to generalize one of our crash-recovery consensus protocols as a "multicoordinated" mode of a hybrid Consensus protocol that may use either spontaneous ordering or failure detection to make progress. Compared to other protocols, ours provides improved availability at no cost in resilience. The third contribution is to employ this new mode to solve Generalized Consensus, a problem that generalizes a series of other agreement problems and is therefore of great practical interest. Moreover, we considered several aspects of solving this problem in practice that had not been considered before. As a result, our Generalized Consensus protocol features graceful degradation and load balancing, and is parsimonious in its access to stable storage. In the second part of this thesis we consider agreement problems in wide-area networks organized hierarchically. More specifically, we consider a topology that is commonplace in the data centers of large corporations: groups of nodes, with high-bandwidth, low-latency links connecting nodes in the same group, and slow, limited links connecting nodes in different groups. In such environments latency is clearly a major concern, and reconfiguration procedures that render the agreement protocol momentarily unavailable must be avoided as much as possible. Our contribution here lies in avoiding reconfigurations and improving the availability of a collision-fast agreement protocol, that is, a protocol that can reach agreement in two intergroup communication steps irrespective of concurrent proposals. Besides the use of a multicoordinated approach, we employ multicast primitives and consensus to restrict some reconfigurations to within groups, where they are less expensive. In the last part of this thesis we study the problem of terminating distributed transactions. The problem consists of enforcing agreement among the parties on whether to commit or roll back the transaction, and of ensuring the durability of committed transactions. Our contribution in this topic is an abstract log service that detaches the termination problem from the processes actually performing the transactions. The service works as a black box and abstracts its implementation details from the application using it. Moreover, it allows slow and failed resource managers to be restarted on different hosts without relying on the stable storage of the previous host. We provide two implementations of the service, which we evaluated experimentally. / Doctorate / Doctor in Computer Science
430

Uma abordagem cognitiva para auto-configuração de protocolos de comunicação / A cognitive approach to self-configuration of communication protocols

Malheiros, Neumar Costa, 1981- 23 August 2018
Advisors: Edmundo Roberto Mauro Madeira, Nelson Luis Saldanha da Fonseca / Thesis (doctorate) - Universidade Estadual de Campinas, Instituto de Computação / Abstract: As network technologies evolve, the complexity of managing communication infrastructures and protocols increases. This complexity makes the management of current communication networks a major challenge. Traditional centralized solutions for network management are not scalable and are incapable of providing continuous reconfiguration of network protocols in response to time-varying conditions. In this work we present a feasible and effective solution for self-configuration of communication protocols. We propose a cognitive approach for dynamic reconfiguration of protocol parameters in order to avoid performance degradation as a consequence of changing network conditions. The proposed cognitive framework, called CogProt, provides runtime adjustment of protocol parameters through learning and reasoning mechanisms. It periodically reconfigures the parameters of interest based on acquired knowledge in order to improve system-wide performance. The proposed approach is decentralized and can be applied to runtime adjustment of a wide range of protocol parameters at different layers of the protocol stack. We present a number of case studies to illustrate the application of the proposed approach. Both simulation and wide-area network experiments were performed to evaluate its performance. The results demonstrate the effectiveness of the proposed approach in improving overall performance across different network scenarios and in avoiding performance degradation by reacting promptly to network changes. / Doctorate / Computer Science / Doctor in Computer Science
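As a rough illustration of the kind of feedback loop such a framework runs, the sketch below periodically picks a value for one protocol parameter, applies it, and updates a quality estimate from monitored performance. The candidate values, the epsilon-greedy rule, and the throughput figure are invented for illustration; CogProt's actual learning and decision mechanisms may differ.

```python
# Simplified parameter-adjustment loop; values and reward signal are placeholders, not CogProt's.
import random

candidate_values = [16, 32, 64, 128]           # e.g. candidate settings for one protocol parameter
quality = {v: 0.0 for v in candidate_values}   # running performance estimate per setting

def choose(epsilon: float = 0.1) -> int:
    """Mostly exploit the best-known setting, occasionally explore another one."""
    if random.random() < epsilon:
        return random.choice(candidate_values)
    return max(candidate_values, key=lambda v: quality[v])

def update(value: int, measured_throughput: float, alpha: float = 0.3) -> None:
    """Blend the newly monitored performance into the estimate for that setting."""
    quality[value] += alpha * (measured_throughput - quality[value])

# One reconfiguration cycle: pick a value, apply it, monitor, and learn from the result.
current = choose()
update(current, measured_throughput=42.0)      # placeholder measurement
```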
