181

Model-based attack injection for security protocol testing

Morais, Anderson Nunes Paiva, 14 August 2018
Advisors: Eliane Martins, Ricardo de Oliveira Anido / Dissertation (master's) - Universidade Estadual de Campinas, Instituto de Computação / Previous issue date: 2009 / Abstract: We present an attack injection approach for security protocol testing. The goal is to uncover protocol vulnerabilities that an attacker can exploit to cause security failures. Our approach uses a fault injector to emulate an attacker that has full control over the communication system. Since the success of the tests depends greatly on the attacks injected, we propose a model-based approach to attack generation. The model represents known, reported attacks on the protocol under test. From this model, attack scenarios are generated in a format that is independent of the fault injector used. Through refinements and transformations, the abstract scenario description can be converted into scripts specific to the chosen fault injector. The approach can be fully supported by software tools. We illustrate its use in a case study: a security protocol for mobile devices / Universidade Estadual de Campinas / Fault Tolerance / Master in Computer Science
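
The pipeline described in this abstract lends itself to a small illustration. Below is a minimal Python sketch of how a model of known attacks could be walked to produce injector-independent attack scenarios and then rendered into scripts for a particular fault injector. The attack model, action names and `inject` command syntax are all hypothetical, invented for illustration; the thesis defines its own model notation and transformation chain.

```python
# Illustrative sketch: enumerate attack scenarios from a model of known
# attacks, then render them as injector-specific scripts. All states,
# actions and the "inject" syntax are hypothetical placeholders.

# Attack model as a labeled transition system: state -> [(action, next_state)]
ATTACK_MODEL = {
    "idle":      [("intercept_handshake", "mitm")],
    "mitm":      [("replay_auth_token", "replayed"),
                  ("corrupt_payload", "corrupted")],
    "replayed":  [],   # terminal: session hijack attempt
    "corrupted": [],   # terminal: integrity violation attempt
}

def scenarios(state="idle", path=()):
    """Enumerate attack scenarios as action sequences ending in a terminal."""
    steps = ATTACK_MODEL[state]
    if not steps:
        yield list(path)
        return
    for action, nxt in steps:
        yield from scenarios(nxt, path + (action,))

def to_injector_script(scenario):
    """Transform an abstract scenario into injector-specific commands."""
    return "\n".join(f"inject {action}" for action in scenario)

for sc in scenarios():
    print(to_injector_script(sc))
    print("---")
```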
182

A high availability solution for the Hadoop Distributed File System

Oriani, André, 1984-, 22 August 2018
Advisor: Islene Calciolari Garcia / Dissertation (master's) - Universidade Estadual de Campinas, Instituto de Computação / Previous issue date: 2013 / Abstract: System designers generally adopt cluster-based file systems as the storage solution for high-performance computing environments because they provide data reliability, consistency and high throughput. But most of those file systems employ a centralized architecture, which compromises their availability. This work focuses on one such system, the Hadoop Distributed File System (HDFS). A hot standby for the master node of HDFS is proposed in order to bring high availability to the system. The hot standby was achieved by (i) extending the master's state replication performed by its checkpointer helper, the Backup Node; and by (ii) introducing an automatic failover mechanism. Step (i) took advantage of the message duplication technique developed by another high availability solution for HDFS named AvatarNodes. Step (ii) employed ZooKeeper, a distributed coordination service. That approach resulted in small code changes, around 0.18% of the original code, which makes the solution easy to understand and to maintain. Experiments showed that the overhead implied by replication did not increase the average resource consumption of system nodes by more than 11%, nor did it diminish data throughput compared to the original version of HDFS. The complete transition to the hot standby can take up to 60 seconds on workloads dominated by I/O operations, but less than 0.4 seconds when metadata requests predominate. Those results show that the solution achieved its goals: a high availability solution for HDFS with low overhead and short reaction time to failures / Master's / Computer Science / Master in Computer Science
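
As a sketch of how step (ii) can be realized, the fragment below uses ZooKeeper ephemeral znodes through the kazoo Python client: the active master holds an ephemeral znode, and the standby watches it and promotes itself when the znode disappears. The znode path and identifiers are assumptions for illustration; the thesis integrates the equivalent logic into the HDFS master and Backup Node rather than a standalone script.

```python
# Minimal failover sketch using ZooKeeper ephemeral znodes (kazoo client).
# The znode path and node name are hypothetical.
from kazoo.client import KazooClient
from kazoo.exceptions import NodeExistsError

ACTIVE_PATH = "/hdfs/active-master"   # hypothetical coordination znode

zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()

def become_active():
    try:
        # Ephemeral: the znode vanishes if this process dies or its
        # ZooKeeper session expires, which is what signals failover.
        zk.create(ACTIVE_PATH, b"standby-1", ephemeral=True, makepath=True)
        print("promoted: now acting as active master")
    except NodeExistsError:
        pass  # lost the race: another node became active first

def on_active_change(event):
    # Watch callback: re-check the znode and try to take over if gone.
    if zk.exists(ACTIVE_PATH, watch=on_active_change) is None:
        become_active()

if zk.exists(ACTIVE_PATH, watch=on_active_change) is None:
    become_active()  # no active master yet: take over immediately
```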
183

Reliable component-based software development with an Agile Method

Braz, Alan, 1980-, 23 August 2018
Advisor: Cecília Mary Fischer Rubira / Dissertation (master's) - Universidade Estadual de Campinas, Instituto de Computação / Previous issue date: 2013 / Abstract: Agile Software Development (ASD) has gone mainstream in the last decade through methodologies such as Extreme Programming (XP) and Scrum, which has led to its application in the development of computer systems of varying size, technical and domain complexity, and degrees of reliability. This fact highlights the need for software development processes that are rigorous and have an adequate amount of modeling and documentation, especially regarding the architectural design, aiming to increase the quality of the end result. Reliability can be achieved by adding exception handling elements at early stages of development and through component reuse. Exception handling has been a widely used technique for detecting and fixing errors in software systems. MDCE+ is a method that assists the modeling of exceptional behavior in component-based systems; being architecture-centric, it improves the definition and flow analysis of exceptions between system components. This work proposes a solution to guide the development of reliable component-based systems by adding MDCE+ practices to Scrum, resulting in the Scrum+CE method (Scrum with Exceptional Behavior). This process exposes exceptional requirements at the User Story level, documents acceptance tests in more detail, requires the creation of a high-level architecture artifact and adds a new role of Architecture Owner. In order to evaluate the proposed method, a controlled experiment was conducted with three teams, which developed a system with reliability requirements using Scrum and Scrum+CE. We collected metrics to compare the efficiency of the new process, and the result obtained with Scrum+CE was the production of software with better quality but with fewer features / Master's / Computer Science / Master in Computer Science
184

Fault detection in autonomous robots

Christensen, Anders Lyhne, 27 June 2008
In this dissertation, we study two new approaches to fault detection for autonomous robots. The first approach involves the synthesis of software components that give a robot the capacity to detect faults which occur in itself. Our hypothesis is that hardware faults change the flow of sensory data and the actions performed by the control program. By detecting these changes, the presence of faults can be inferred. In order to test our hypothesis, we collect data in three different tasks performed by real robots. During a number of training runs, we record sensory data from the robots both while they are operating normally and after a fault has been injected. We use back-propagation neural networks to synthesize fault detection components based on the data collected in the training runs. We evaluate the performance of the trained fault detectors in terms of the number of false positives and the time it takes to detect a fault. The results show that good fault detectors can be obtained. We extend the set of possible faults and go on to show that a single fault detector can be trained to detect several faults in both a robot's sensors and actuators. We show that fault detectors can be synthesized that are robust to variations in the task. Finally, we show how a fault detector can be trained to allow one robot to detect faults that occur in another robot.

The second approach involves the use of firefly-inspired synchronization to allow the presence of faulty robots to be determined by other non-faulty robots in a swarm robotic system. We take inspiration from the synchronized flashing behavior observed in some species of fireflies. Each robot flashes by lighting up its on-board red LEDs, and neighboring robots are driven to flash in synchrony. The robots always interpret the absence of flashing by a particular robot as an indication that the robot has a fault. A faulty robot can stop flashing periodically for one of two reasons: the fault itself can render the robot unable to flash periodically, or the faulty robot might detect the fault itself using endogenous fault detection and decide to stop flashing. Thus, catastrophic faults in a robot can be directly detected by its peers, while the presence of less serious faults can be detected by the faulty robot itself and actively communicated to neighboring robots. We explore the performance of the proposed algorithm both on a real-world swarm robotic system and in simulation. We show that failed robots are detected correctly and in a timely manner, and we show that a system composed of robots with simulated self-repair capabilities can survive relatively high failure rates.

We conclude that (i) fault injection and learning can give robots the capacity to detect faults that occur in themselves, and that (ii) firefly-inspired synchronization can enable robots in a swarm robotic system to detect and communicate faults. / Doctorate in Engineering Sciences / info:eu-repo/semantics/nonPublished
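
To make the first approach concrete, here is a minimal sketch assuming flattened windows of sensory data and scikit-learn's MLPClassifier standing in for the dissertation's own back-propagation setup; the data layout, labels and network size are invented for illustration.

```python
# Sketch: train a back-propagation network to classify windows of
# sensor/actuator data as "normal" vs "faulty". The data here is fake;
# the dissertation collects such windows from real robot runs, with
# labels marking data recorded after fault injection.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Fake training data: 500 windows of 10 timesteps x 4 channels, flattened.
X = rng.normal(size=(500, 40))
y = (X[:, :10].mean(axis=1) > 0.2).astype(int)  # stand-in fault labels

detector = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000,
                         random_state=0).fit(X, y)

# At run time, each incoming window is scored; false positives and
# detection latency are then measured as in the dissertation.
window = rng.normal(size=(1, 40))
print("fault suspected" if detector.predict(window)[0] else "nominal")
```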
185

Fast And Efficient Submesh Determination In Faulty Tori

Pranav, R, 12 1900
No description available.
186

Efficient Fault Tolerance In Chip Multiprocessors Using Critical Value Forwarding

Subramanyan, Pramod, 06 1900
Relentless CMOS scaling coupled with lower design tolerances is making ICs increasingly susceptible to transient faults, wear-out related permanent faults and process variations. Decreasing CMOS reliability implies that high-availability systems, previously restricted to the domain of mainframe computers or specially designed fault-tolerant systems, may become important for the commodity market as well. In this thesis we tackle the problem of enabling efficient, low-cost and configurable fault tolerance using Chip Multiprocessors (CMPs). Our work studies architectural fault detection methods based on redundant execution, specifically focusing on "leader-follower" architectures. In such architectures redundant execution is performed on two cores/threads of a CMP: one thread acts as the leading thread while the other acts as the trailing thread. The leading thread assists the execution of the trailing thread by forwarding the results of its execution; these forwarded results are used as predictions in the trailing thread and help improve its performance. In this thesis, we introduce a new form of execution assistance called critical value forwarding. Critical value forwarding uses heuristics to identify instructions on the critical path of execution and forwards the results of these instructions to the trailing core. Its advantage is that it provides much of the speedup obtained by forwarding all values at a fraction of the bandwidth cost. We propose two architectures to exploit the idea of critical value forwarding. The first operates the trailing core at lower voltage/frequency levels in order to provide energy-efficient redundant execution; in this context, we also introduce algorithms to dynamically adapt the voltage/frequency level of the trailing core based on program behavior. Our experimental evaluation shows that this proposal consumes only 1.26 times the energy of a non-fault-tolerant baseline and has a mean performance overhead of about 1%. We compare our proposal to two previous energy-efficient fault-tolerant CMP proposals and find that ours delivers higher energy efficiency and lower performance degradation than both while providing a similar level of fault coverage. Our second proposal uses critical value forwarding to improve fault-tolerant CMP throughput by using coarse-grained multithreading to multiplex trailing threads on a single core. Our evaluation shows that this architecture delivers 9-13% higher throughput than previous proposals, including one configuration that uses simultaneous multithreading (SMT) to multiplex trailing threads. Since this proposal increases fault-tolerant CMP throughput by executing multiple threads on a single core, it comes at a modest cost in single-threaded performance, a mean slowdown between 11% and 14%.
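
A toy sketch of the critical value forwarding idea follows. The criticality heuristic used here (rank instructions by fan-out of dependents) and the instruction representation are invented stand-ins; the thesis develops its own heuristics over the dynamic critical path.

```python
# Toy sketch of critical value forwarding: the leading core forwards only
# the results of instructions heuristically judged critical, within a
# bandwidth budget; the trailing core uses them as predictions and still
# verifies by re-execution. The fan-out heuristic is illustrative only.
from collections import namedtuple

Insn = namedtuple("Insn", "id result consumers")  # consumers: dependent insn ids

def select_critical(window, budget):
    """Pick up to `budget` results to forward, widest fan-out first,
    a stand-in for a real critical-path heuristic."""
    ranked = sorted(window, key=lambda i: len(i.consumers), reverse=True)
    return {i.id: i.result for i in ranked[:budget]}

window = [Insn(1, 42, [2, 3, 4]), Insn(2, 7, [5]), Insn(3, 9, [])]
forwarded = select_critical(window, budget=1)  # bandwidth-limited channel

# Trailing core: treat forwarded values as predictions, verify on completion.
for insn in window:
    prediction = forwarded.get(insn.id)
    recomputed = insn.result  # trailing core re-executes the instruction
    assert prediction is None or prediction == recomputed, "fault detected"
```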
187

A self-adaptive infrastructure based on software product lines for fault-tolerant service compositions

Nascimento e Silva, Amanda Sávio, 1982-, 24 August 2018
Advisor: Cecília Mary Fischer Rubira / Thesis (doctorate) - Universidade Estadual de Campinas, Instituto de Computação / Previous issue date: 2013 / Abstract: Nowadays, society depends on systems based on Service-Oriented Architecture (SOA) for its basic day-to-day functioning; as a consequence, these systems should be reliable. Fault-tolerant service compositions encompass a set of services, each with equivalent functionality yet different designs, called alternate services, that are used to implement fault tolerance techniques. A particular technique, for example Recovery Blocks or N-version Programming, might be more suitable in one context than in another, depending on the non-functional requirements of an application, such as performance or reliability. SOA-based applications often run in a highly dynamic environment where several decisions must be postponed until runtime: different stakeholders have conflicting requirements, and fluctuations in the quality of services (QoS) are recurrent. Therefore, a fault-tolerant service composition should adapt itself to this widely and dynamically changing context. Nevertheless, existing diversity-based solutions for fault-tolerant service compositions present some drawbacks: (i) they do not support the selection of alternate services that are in fact diverse enough to yield a composition that tolerates software faults; (ii) they usually support only one fault tolerance technique, thus failing to address varied client requirements; and (iii) they do not support an adaptive fault tolerance mechanism able to instantiate different fault tolerance strategies at runtime to cope with dynamic changes in the context. In this thesis, we present a solution based on software product lines, which exploits the variability among software fault tolerance techniques and changes in the execution environment to implement fault-tolerant, self-adaptive service compositions. The proposed solution encompasses: (a) a set of directives to investigate to what extent alternate services are able to tolerate software faults; (b) a family of software fault tolerance techniques to support reliable service compositions, so that the most suitable technique can be chosen for the context; and (c) a self-adaptive infrastructure that instantiates appropriate fault tolerance techniques at runtime in response to changes in the context, through dynamic management of software variability. Results from empirical studies suggest that the proposed solution efficiently supports fault-tolerant, self-adaptive service compositions. Directions for future work are also presented / Doctorate / Computer Science / Doctor in Computer Science
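
As one concrete point in the family of techniques the thesis builds on, below is a compact Python sketch of Recovery Blocks over alternate services: functionally equivalent services are tried in order of preference and the first result passing an acceptance test is accepted. The provider functions and acceptance test are placeholders; a self-adaptive infrastructure such as the one proposed would select this strategy, or swap it for N-version Programming, at runtime.

```python
# Sketch of Recovery Blocks over alternate services: invoke functionally
# equivalent services in order of preference; accept the first result
# that passes the acceptance test. Services and test are placeholders.
def recovery_blocks(alternates, acceptance_test, *args):
    for service in alternates:
        try:
            result = service(*args)
        except Exception:
            continue  # service failed outright: fall through to the next
        if acceptance_test(result):
            return result  # first acceptable result wins
    raise RuntimeError("all alternate services failed the acceptance test")

# Hypothetical alternates for a currency-conversion composition step.
def provider_a(amount): return amount * 5.1
def provider_b(amount): return amount * 5.0

rate_ok = lambda value: value > 0  # trivial stand-in acceptance test
print(recovery_blocks([provider_a, provider_b], rate_ok, 100))
```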
188

Risk-based proactive availability management - attaining high performance and resilience with dynamic self-management in Enterprise Distributed Systems

Cai, Zhongtang, 10 January 2008
Complex distributed systems have come to play a serious role in industry and society: distributed information flow systems that continuously acquire, manipulate and disseminate information across an enterprise's distributed sites and machines, and distributed server applications co-deployed in one or more shared data centers, each with different performance/availability requirements that vary over time and compete for shared resources. Consequently, it has become more important for enterprise-scale IT infrastructure to provide timely and sustained/reliable delivery and processing of service requests. Despite more than 30 years of progress in distributed computer connectivity, availability and reliability, this has not become easier [ReliableDistributedSys], if anything more difficult, for many reasons: the increasing complexity of enterprise-scale computing infrastructure; the distributed nature of these systems, which makes them prone to failures, e.g., because of inevitable Heisenbugs; the need to consider diverse and complex business objectives and policies, including risk preferences and attitudes, in enterprise computing; conflicts between performance and availability, and the varying importance of sub-systems competing for resources in today's typically shared environments; and the best-effort nature of resources such as network resources, which makes resource availability itself an issue. This thesis proposes a novel business-policy-driven, risk-based automated availability management that uses an automated decision engine to make availability decisions and meet business policies while optimizing overall system utility, uses utility theory to capture users' risk attitudes, and addresses the potentially conflicting business goals and resource demands of enterprise-scale distributed systems. For critical and complex enterprise applications, since a key contributor to application utility is the time taken to recover from failures, we develop a novel proactive fault tolerance approach that uses online failure prediction to dynamically determine the acceptable amounts of additional processing and communication resources to be used (i.e., costs) to attain given levels of utility and acceptable delays in failure recovery. Since resource availability itself is often not guaranteed in typical shared enterprise IT environments, this thesis provides IQ-Paths with probabilistic service guarantees to address the dynamic network behavior of realistic enterprise computing environments. The risk-based formulation effectively links the operational guarantees expressed by utility and enforced by the PGOS algorithm with the higher-level business objectives sought by end users. Together, this thesis proposes a novel availability management framework and methods for large-scale enterprise applications and systems, with the goal of providing different levels of performance/availability guarantees for multiple applications and sub-systems in a complex shared distributed computing infrastructure.

More specifically, this thesis addresses the following problems. For data center environments: (1) how to provide availability management for applications and systems that vary both in resource requirements and in their importance to the enterprise, based on operational-level quantities as well as business-level objectives; (2) how to deal with managerial policies such as risk attitude; and (3) how to deal with the tradeoff between performance and availability, given the limited resources of a typical data center. Since realistic business settings extend beyond single data centers, a second set of problems concerns predictable and reliable operation in wide-area settings. For such systems, we explore (4) how to provide high availability in widely distributed operational systems with low-cost fault tolerance mechanisms, and (5) how to provide probabilistic service guarantees given best-effort network resources.
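
The risk-based decision idea can be sketched in a few lines: choose the availability action that maximizes expected utility, with a concave utility curve standing in for a risk-averse business policy. All actions, probabilities and values below are invented for illustration and are not drawn from the thesis.

```python
# Sketch of utility-driven availability decisions: pick the action with
# the highest expected utility; a concave utility curve models a
# risk-averse policy. Every number here is an illustrative placeholder.
import math

def utility(value, risk_aversion=1.0):
    # Concave (risk-averse) utility; risk_aversion -> 0 is risk-neutral.
    return 1.0 - math.exp(-risk_aversion * value)

# (action, resource cost, P(fast recovery), value if fast, value if slow)
ACTIONS = [
    ("no_replica",   0.0, 0.30, 1.0, 0.1),
    ("warm_replica", 0.2, 0.80, 1.0, 0.1),
    ("hot_replica",  0.5, 0.99, 1.0, 0.1),
]

def expected_utility(cost, p_fast, v_fast, v_slow):
    return (p_fast * utility(v_fast - cost)
            + (1 - p_fast) * utility(v_slow - cost))

best = max(ACTIONS, key=lambda a: expected_utility(*a[1:]))
print("chosen availability action:", best[0])
```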
189

Asynchronous qualitative representation of signals for the supervision of dynamic systems

Colomer, Joan (Colomer Llinàs), 28 July 1998
The general objective of this work is to find and demonstrate a tool for obtaining a representation of the signals coming from dynamic systems that suits the needs of Expert Supervision systems for processes. This general objective can be subdivided into several parts, treated in the different chapters of the work, which can be summarized as follows. First, the needs of Supervision systems must be understood: the large amount of data coming from processes makes it necessary to process these data in order to obtain other, more elaborate data at a higher level of representation. The use of qualitative reasoning, natural to human beings, entails the need to represent signals symbolically, translating numerical data into symbols. In the Supervision of dynamic systems, time is a fundamental variable, and the asynchrony of the events that matter for Supervision means that the most adequate and useful signal representations are asynchronous. Finally, the use of experiential knowledge in process Supervision means that the most natural representations are the most useful. These needs make episode-based signal representation the most promising tool for achieving the stated objectives. A formalism is therefore presented that can describe and subsume the existing formalizations of and approaches to this type of representation while increasing its significance through signal characteristics that existing approaches do not take into account. The next step exploits the new formalism to obtain a new representation with a greater degree of significance, achieved by explicitly representing discontinuities and stationary (stable) periods, which are highly significant in process Supervision. An ever-present problem in signal processing is noise; a method is therefore presented that filters noise so that the resulting representations are affected as little as possible by this processing. Finally, the online application of the described tools is presented. Online representation of signals entails handling the uncertainty inherent to partial knowledge of the signal (an episode cannot be fully determined and characterized until it ends). Obtaining results with given degrees of certainty is perfectly consistent with their later use by Expert Systems or other AI tools. All the contributions of the work are accompanied by examples and/or applications that illustrate their usefulness and limitations.
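
A minimal sketch of the episode idea: segment a sampled signal into (trend, start, end) episodes by the sign of successive differences, emitting an episode only when the qualitative trend changes (an asynchronous event). The dead band and trend labels are assumptions; the thesis's formalism is far richer, covering discontinuities, stationary periods and the uncertainty of the still-open last episode.

```python
# Sketch: asynchronous episode representation of a sampled signal.
# Consecutive samples are merged into episodes labeled by qualitative
# trend; an episode is emitted only when the trend changes. The dead
# band eps standing in for "steady" is an assumption.
def episodes(samples, eps=0.05):
    out, start, trend = [], 0, None
    for i in range(1, len(samples)):
        d = samples[i] - samples[i - 1]
        t = "steady" if abs(d) < eps else ("increasing" if d > 0 else "decreasing")
        if trend is None:
            trend = t
        elif t != trend:
            out.append((trend, start, i - 1))  # close the finished episode
            start, trend = i - 1, t
    out.append((trend, start, len(samples) - 1))  # last episode, still open
    return out

signal = [0.0, 0.1, 0.5, 1.0, 1.02, 1.01, 0.7, 0.3]
for ep in episodes(signal):
    print(ep)   # e.g. ('increasing', 0, 3), ('steady', 3, 5), ...
```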
190

PTTA: a protocol for content distribution in delay- and disruption-tolerant networks

Albini, Fábio Luiz Pessoa, 30 October 2013
Abstract: This work proposes a new transport protocol for delay- and disruption-tolerant networks (DTNs), called DTTP - Delay Tolerant Transport Protocol (in Portuguese, PTTA - Protocolo de Transporte Tolerante a Atrasos). The protocol aims to provide statistical reliability for information delivery in DTNs, using fountain codes as the error correction technique. The results show the advantages of using DTTP. This work also proposes an adaptive control mechanism for the DTTP source that limits the amount of data the source generates. The proposed scheme aims to increase the diversity of the encoded information without increasing the load on the network; to achieve this, the message generation interval and TTL (Time To Live) are adjusted based on network metrics. To validate the efficiency of the proposed mechanism, different scenarios were tested using the main routing protocols for DTNs. The performance results take into account the buffer size, the message TTL and the amount of redundant information generated on the network. Simulation results obtained with the ONE simulator show that, in the evaluated scenarios, DTTP achieves a higher information delivery rate in a shorter time than another transport protocol without acknowledgments, thus yielding a gain in network performance.
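
A toy illustration of the fountain-code mechanism underlying DTTP's statistical reliability: the source can emit an endless stream of symbols, each the XOR of a random subset of data blocks, and a receiver can decode from any sufficiently large subset with a peeling decoder. This bare-bones LT-style sketch is for illustration only and is not the thesis's implementation.

```python
# Toy fountain code: each encoded symbol is the XOR of a random nonempty
# subset of source blocks; a peeling decoder recovers the blocks from any
# sufficiently large set of symbols, with high probability.
import random

BLOCKS = [3, 14, 15, 92, 65]   # source blocks (small ints, XOR-friendly)
K = len(BLOCKS)

def encode(seed):
    """One rateless symbol: (neighbor set, XOR of those blocks)."""
    rng = random.Random(seed)
    idx = set(rng.sample(range(K), rng.randint(1, K)))
    val = 0
    for i in idx:
        val ^= BLOCKS[i]
    return idx, val

def peel(symbols):
    """Peeling decoder: substitute known blocks, harvest degree-1 symbols."""
    decoded = {}
    pending = [[set(idx), val] for idx, val in symbols]
    progress = True
    while progress and len(decoded) < K:
        progress = False
        for sym in pending:
            for i in [j for j in sym[0] if j in decoded]:
                sym[0].discard(i)      # substitute an already-decoded block
                sym[1] ^= decoded[i]
            if len(sym[0]) == 1:       # degree-1 symbol reveals a block
                (i,) = sym[0]
                if i not in decoded:
                    decoded[i] = sym[1]
                    progress = True
    return decoded

symbols = [encode(s) for s in range(12)]   # lossy channel: any 12 symbols
decoded = peel(symbols)
print(f"recovered {len(decoded)}/{K} blocks:", decoded)
```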
