21

Evolving Future Internet clean-slate Entity Title Architecture with quality-oriented control-plane extensions / Extensões orientadas a qualidade ao plano de controle da Arquitetura Entidade-Título

Lema, José Castillo 31 July 2014 (has links)
Made available in DSpace on 2014-12-17T15:48:11Z (GMT). No. of bitstreams: 1 JoseCL_DISSERT.pdf: 3900397 bytes, checksum: b91f886645164577ed2a25d0dc1d2260 (MD5) Previous issue date: 2014-07-31 / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / A Internet atual vem sofrendo vários problemas em termos de escalabilidade, desempenho, mobilidade, etc., devido ao vertiginoso incremento no número de usuários e ao surgimento de novos serviços com novas demandas, propiciando assim o nascimento da Internet do Futuro. Novas propostas sobre redes orientadas a conteúdo, como a arquitetura Entidade-Título (ETArch), proveem novos serviços para este tipo de cenários, implementados sobre o paradigma de redes definidas por software. Contudo, o modelo de transporte do ETArch é equivalente ao modelo best-effort da Internet atual, o que vem limitando a confiabilidade das suas comunicações. Neste trabalho, o ETArch é redesenhado seguindo o paradigma do sobreaprovisionamento de recursos para conseguir uma alocação de recursos avançada integrada com OpenFlow. Como resultado, o framework SMART (Suporte de Sessões Móveis com Alta Demanda de Recursos de Transporte) permite que a rede defina semanticamente os requisitos qualitativos das sessões, para assim gerenciar o controle de Qualidade de Serviço visando manter a melhor Qualidade de Experiência possível. A avaliação dos planos de dados e de controle teve lugar na plataforma de testes da ilha do projeto OFELIA, mostrando o suporte de aplicações móveis multimídia com alta demanda de recursos de transporte, com QoS e QoE garantidos através de um esquema de sinalização restrito em comparação com o ETArch legado. / The current Internet has confronted quite a few problems in terms of network mobility, quality, scalability, performance, etc., mainly due to the rapid increase in the number of end-users and various new service demands, requiring new solutions to support future usage scenarios. New Future Internet approaches targeting Information Centric Networking, such as the Entity Title Architecture (ETArch), provide new services and optimizations for these scenarios, using novel mechanisms that leverage the Software Defined Networking (SDN) concept. However, the ETArch transport model is equivalent to the best-effort capability of the current Internet, which limits the achievement of reliable communications. In this work, ETArch was evolved with both quality-oriented mobility and resilience functions following the over-provisioning paradigm to achieve advanced network resource allocation integrated with OpenFlow. The resulting framework, called Support of Mobile Sessions with High Transport Network Resource Demand (SMART), allows the network to semantically define the quality requirements of each session to drive network Quality of Service control, seeking to keep the best possible Quality of Experience. The SMART evaluation in both the data and control planes was carried out on a real testbed of the OFELIA Brazilian island, showing that its quality-oriented network functions supported bandwidth-intensive multimedia applications with high QoS and QoE over time, through a restricted signalling scheme in comparison with the legacy ETArch.
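
The SMART sources are not part of this record, but the admission idea the abstract describes, provisioning a session's resources in advance instead of forwarding best-effort, can be sketched as below. This is a minimal illustration assuming hypothetical names (SessionRequirements, QualityOrientedControlPlane) and a single-link capacity model; the real framework installs OpenFlow flow entries along ETArch workspaces.

```python
# Hypothetical sketch (illustrative names, not the SMART code): mapping a
# session's semantic quality requirements to an advance resource reservation,
# in the spirit of SMART's over-provisioning on top of OpenFlow.

from dataclasses import dataclass

@dataclass
class SessionRequirements:
    title: str                  # ETArch names sessions by title, not address
    min_bandwidth_mbps: float
    max_latency_ms: float

class QualityOrientedControlPlane:
    """Admits sessions only while guaranteed resources remain available."""
    def __init__(self, link_capacity_mbps: float):
        self.capacity = link_capacity_mbps
        self.reserved = 0.0

    def admit(self, req: SessionRequirements) -> bool:
        if self.reserved + req.min_bandwidth_mbps > self.capacity:
            return False        # no over-subscription: QoS stays guaranteed
        self.reserved += req.min_bandwidth_mbps
        # A real controller would now push OpenFlow flow entries (with queue
        # or meter configuration) along the session's path.
        return True

cp = QualityOrientedControlPlane(link_capacity_mbps=100.0)
print(cp.admit(SessionRequirements("video.session.hd", 25.0, 150.0)))  # True
```
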
22

Confiance et incertitude dans les environnements distribués : application à la gestion des données et de la qualité des sources de données dans les systèmes M2M (Machine to Machine). / Trust and uncertainty in distributed environments : application to the management of data and data sources quality in M2M (Machine to Machine) systems.

Ravi, Mondi 19 January 2016 (has links)
La confiance et l'incertitude sont deux aspects importants des systèmes distribués. Par exemple, de multiples sources d'information peuvent fournir le même type d'information. Cela pose le problème de sélectionner la source la plus fiable et de résoudre l'incohérence dans l'information disponible. Gérer de front la confiance et l'incertitude constitue un problème complexe et nous développons, à travers cette thèse, une solution pour y répondre. La confiance et l'incertitude sont intrinsèquement liées. La confiance concerne principalement les sources d'information alors que l'incertitude est une caractéristique de l'information elle-même. En l'absence de mesures de confiance et d'incertitude, un système doit généralement faire face à des problèmes tels que l'incohérence et l'incertitude. Pour aborder ce point, nous émettons l'hypothèse que les sources dont les niveaux de confiance sont élevés produiront de l'information plus fiable que les sources dont les niveaux de confiance sont inférieurs. Nous utilisons ensuite les mesures de confiance des sources pour quantifier l'incertitude dans l'information et ainsi obtenir des conclusions de plus haut niveau avec plus de certitude. Une tendance générale dans les systèmes distribués modernes consiste à intégrer des capacités de raisonnement dans les composants pour les rendre intelligents et autonomes. Nous modélisons ces composants comme des agents d'un système multi-agents. Les principales sources d'information de ces agents sont les autres agents, et ces derniers peuvent posséder des niveaux de confiance différents. De plus, l'information entrante et les croyances qui en découlent sont associées à un degré d'incertitude. Par conséquent, les agents sont confrontés à un double problème: celui de la gestion de la confiance sur les sources et celui de la présence de l'incertitude dans l'information. Nous illustrons cela avec trois domaines d'application: (i) la communauté intelligente, (ii) la collecte des déchets dans une ville intelligente, et (iii) les facilitateurs pour les systèmes de l'internet du futur (FIWARE - le projet européen n° 285248, qui a motivé la recherche sur nos travaux). La solution que nous proposons consiste à modéliser les composants de ces domaines comme des agents intelligents qui incluent un module de gestion de la confiance, un moteur d'inférence et un système de révision des croyances. Nous montrons que cet ensemble d'éléments peut aider les agents à gérer la confiance aux autres sources, à quantifier l'incertitude dans l'information et à l'utiliser pour aboutir à certaines conclusions de plus haut niveau. Nous évaluons finalement notre approche en utilisant des données à la fois simulées et réelles relatives aux différents domaines d'application. / Trust and uncertainty are two important aspects of many distributed systems. For example, multiple sources of information can be available for the same type of information. This poses the problem of selecting the best source, the one that can produce the most certain information, and of resolving incoherence amongst the available information. Managing trust and uncertainty together is a complex problem, and through this thesis we develop a solution to it. Trust and uncertainty have an intrinsic relationship: trust is primarily related to sources of information, while uncertainty is a characteristic of the information itself. In the absence of trust and uncertainty measures, a system generally suffers from problems like incoherence and uncertainty.
To improve on this, we hypothesize that sources with higher trust levels will produce more certain information than those with lower trust values. We then use the trust measures of the information sources to quantify uncertainty in the information and thereby infer high-level conclusions with greater certainty. A general trend in modern distributed systems is to embed reasoning capabilities in the end devices to make them smart and autonomous. We model these end devices as agents of a Multi-Agent System. Major sources of beliefs for such agents are external information sources that can possess varying trust levels. Moreover, the incoming information and beliefs are associated with a degree of uncertainty. Hence, the agents face the two-fold problem of managing trust in sources and handling uncertainty in the information. We illustrate this with three application domains: (i) the intelligent community, (ii) smart city garbage collection, and (iii) FIWARE: a European project about the Future Internet that motivated the research on this topic. Our solution to the problem involves modelling the devices (or entities) of these domains as intelligent agents that comprise a trust management module, an inference engine and a belief revision system. We show that this set of components can help agents manage trust in other sources, quantify uncertainty in the information, and then use this to infer more certain high-level conclusions. We finally assess our approach using simulated and real data pertaining to the different application domains.
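
The core hypothesis, that higher-trust sources yield more certain information, can be rendered as a toy ranking rule. Everything below (the Report type, the certainty formula, the sensor names) is an illustrative assumption rather than the thesis' actual belief-revision machinery.

```python
# Hypothetical sketch: rank competing reports by trust-weighted certainty,
# resolving incoherence by keeping the most certain report.

from dataclasses import dataclass

@dataclass
class Report:
    source: str      # reporting agent or sensor
    value: str       # the reported information
    trust: float     # trust level of the source, in [0, 1]

def certainty(report: Report, prior: float = 0.5) -> float:
    """Interpolate between an uninformative prior and full certainty,
    weighted by how much the source is trusted."""
    return report.trust * 1.0 + (1.0 - report.trust) * prior

def select_most_certain(reports: list[Report]) -> Report:
    """Resolve incoherent reports by keeping the most certain one."""
    return max(reports, key=certainty)

# Example from a smart-city garbage-collection scenario (assumed data).
reports = [
    Report("sensor_a", "bin_full", trust=0.9),
    Report("sensor_b", "bin_empty", trust=0.4),
]
print(select_most_certain(reports).value)  # -> bin_full
```
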
23

6G wireless communication systems: applications, opportunities and challenges

Anoh, K., See, C.H., Dama, Y., Abd-Alhameed, Raed, Keates, S. 26 December 2022 (has links)
Yes / As the technical specifications of the 5th Generation (5G) wireless communication standard are being wrapped up, there are growing efforts amongst researchers, industrialists, and standardisation bodies on the enabling technologies of a 6G standard, or the so-called Beyond 5G (B5G) one. Although the 5G standard has presented several benefits, there are still some limitations within it. Such limitations have motivated the setting up of study groups to determine suitable technologies that should operate in the year 2030 and beyond, i.e., after 5G. Consequently, this Special Issue of Future Internet, concerning what possibilities lie ahead for a 6G wireless network, includes four high-quality papers: three review papers, drawing on over 412 cited sources, and one regular research paper. This editorial piece summarises the major contributions of the articles in the Special Issue, outlining future directions for new research.
24

Interoperabilité à large échelle dans le contexte de l'Internet du futur / Large scale interoperability in the context of Future Internet

Rodrigues, Preston 27 May 2013 (has links)
La croissance de l'Internet en tant que plateforme d'approvisionnement à grande échelle de contenus multimédia a été une grande success story du 21e siècle. Toutefois, les applications multimédia, avec les caractéristiques spécifiques de leur trafic ainsi que les exigences des nouveaux services, posent un défi intéressant en termes de découverte, de mobilité et de gestion. En outre, le récent élan de l'Internet des objets a rendu très nécessaire la revitalisation de la recherche pour intégrer des sources hétérogènes d'information à travers des réseaux divers. Dans cet objectif, les contributions de cette thèse essayent de trouver un équilibre entre l'hétérogénéité et l'interopérabilité, pour découvrir et intégrer les sources hétérogènes d'information dans le contexte de l'Internet du Futur. La découverte de sources d'information sur différents réseaux requiert une compréhension approfondie de la façon dont l'information est structurée et des méthodes spécifiques utilisées pour communiquer. Ce processus a été régulé à l'aide de protocoles de découverte. Cependant, les protocoles s'appuient sur différentes techniques et sont conçus en prenant en compte l'infrastructure réseau sous-jacente, limitant ainsi leur capacité à franchir la limite d'un réseau donné. Pour résoudre ce problème, la première contribution de cette thèse tente de trouver une solution équilibrée permettant aux protocoles de découverte d'interagir les uns avec les autres, tout en fournissant les moyens nécessaires pour franchir les frontières entre réseaux. Dans cet objectif, nous proposons ZigZag, un middleware pour réutiliser et étendre les protocoles de découverte courants, conçus pour des réseaux locaux, afin de découvrir des services disponibles dans le large. Notre approche est basée sur la conversion de protocole, permettant la découverte de services indépendamment de leur protocole de découverte sous-jacent. Toutefois, dans les réseaux de grande échelle orientés consommateur, la quantité des messages de découverte pourrait rendre le réseau inutilisable. Pour parer à cette éventualité, ZigZag utilise le concept d'agrégation au cours du processus de découverte. Grâce à l'agrégation, ZigZag est capable d'intégrer plusieurs réponses de différentes sources supportant différents protocoles de découverte. En outre, la personnalisation du processus d'agrégation afin de l'aligner sur ses besoins requiert une compréhension approfondie des fondamentaux de ZigZag. À cette fin, nous proposons une seconde contribution : un langage flexible pour aider à définir les politiques d'une manière propre et efficace. / The growth of the Internet as a large-scale media provisioning platform has been a great success story of the 21st century. However, multimedia applications, with their specific traffic characteristics and novel service requirements, pose an interesting challenge in terms of discovery, mobility and management. Furthermore, the recent impetus of the Internet of Things has made it very necessary to revitalize research in order to integrate heterogeneous information sources across networks. Towards this objective, the contributions in this thesis try to find a balance between heterogeneity and interoperability, to discover and integrate heterogeneous information sources in the context of the Future Internet. Discovering information sources across networks needs a thorough understanding of how the information is structured and what specific methods they follow to communicate. This process has been regulated with the help of discovery protocols. However, protocols rely on different techniques and are designed taking the underlying network infrastructure into account, thus limiting the capability of some protocols to cross network boundaries. To address this issue, the first contribution in this thesis tries to find a balanced solution enabling discovery protocols to interoperate with each other, as well as providing the necessary means to cross network boundaries. Towards this objective, we propose ZigZag, a middleware to reuse and extend current discovery protocols, designed for local networks, to discover available services in the large. Our approach is based on protocol translation, enabling service discovery irrespective of the underlying discovery protocol. Although our approach provides a step forward towards interoperability in the large, we needed to make sure that discovery messages do not create a bottleneck for the network: in large-scale consumer-oriented networks, service discovery messages could render the network unusable. To counter this, ZigZag uses the concept of aggregation during the discovery process. Using aggregation, ZigZag is able to integrate several replies from different service sources supporting different discovery protocols. However, customizing the aggregation process to suit one's needs requires a thorough understanding of ZigZag's fundamentals. To this end, we propose our second contribution, a flexible policy language that can help define policies in a clean and effective way. In addition, the policy language has some added advantages in terms of dynamic management: it provides features like delegation, runtime policy management and logging. We tested our approach with the help of simulations; the results showed that ZigZag can both reduce the number of messages that flow through the network and provide value-sensitive information to the requesting entity. Although ZigZag is designed to discover media services in the large, it can very well be used in other domains like home automation and smart spaces, while the flexible, pluggable, modular design of the policy language enables it to be used in other applications such as e-mail.
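
The protocol-translation and aggregation mechanism can be sketched roughly as follows; the adapter classes and record fields are assumptions made for illustration, and the actual ZigZag middleware and its policy language are considerably richer.

```python
# Hypothetical sketch (names are illustrative, not from the thesis): the
# protocol-translation-plus-aggregation idea behind ZigZag.

from abc import ABC, abstractmethod

class DiscoveryAdapter(ABC):
    """Wraps one local discovery protocol (e.g. mDNS, SSDP) behind a common API."""
    @abstractmethod
    def discover(self, service_type: str) -> list[dict]: ...

class MdnsAdapter(DiscoveryAdapter):
    def discover(self, service_type: str) -> list[dict]:
        # Stub: a real adapter would send an mDNS query on the local link.
        return [{"name": "tv.local", "type": service_type, "proto": "mdns"}]

class SsdpAdapter(DiscoveryAdapter):
    def discover(self, service_type: str) -> list[dict]:
        # Stub: a real adapter would send an SSDP M-SEARCH.
        return [{"name": "nas.lan", "type": service_type, "proto": "ssdp"}]

class ZigZagBroker:
    """Translates one discovery request across protocols and aggregates the
    replies into a single response, so one message crosses the network
    instead of one per protocol per source."""
    def __init__(self, adapters: list[DiscoveryAdapter]):
        self.adapters = adapters

    def discover(self, service_type: str) -> list[dict]:
        aggregated = []
        for adapter in self.adapters:
            aggregated.extend(adapter.discover(service_type))
        return aggregated

broker = ZigZagBroker([MdnsAdapter(), SsdpAdapter()])
print(broker.discover("media-renderer"))
```
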
25

Gerenciamento autônomo de redes na Internet do futuro

Queiróz, Alexandre Passito de 04 December 2012 (has links)
Made available in DSpace on 2015-04-29T15:10:48Z (GMT). No. of bitstreams: 1 ALEXANDRE PASSITO.pdf: 3822416 bytes, checksum: 4f278e2830ed590e916983c979c90872 (MD5) Previous issue date: 2012-12-04 / CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Autonomous networking research applies intelligent agent and multiagent systems theory to network control mechanisms. Deploying such autonomous and rational entities in the network can improve its behavior in the presence of very dynamic and complex control scenarios. Unfortunately, building agent-based mechanisms for networks is not an easy task. The main difficulty is to create concise knowledge representations about network domains and reasoning mechanisms to deal with them. Furthermore, the Internet makes the design of multiagent systems for network control a challenging activity, involving the modeling of different participants with diverse beliefs and intentions. Such systems often pose scalability problems due to the lack of incentives for cooperation between administrative domains. Finally, as the current structure of the Internet often prevents innovation, autonomous networking mechanisms are not fully deployed in large-scale scenarios. The Software-Defined Networking (SDN) paradigm is in the realm of Future Internet efforts. In the SDN paradigm, packet-forwarding hardware is controlled by software running as a separate control plane. Management software uses an open protocol to program the flow tables in different switches and routers. This work presents a general discussion about the integration of autonomous networks and software-defined networks. Based on the knowledge offered by this discussion, it presents a framework that provides autonomy to SDN domains, allowing them to act cooperatively when deployed in scenarios with distributed management. Two case studies are presented for important open issues in the Internet: (1) the problem of mitigating DDoS attacks when thousands of attackers perform malicious packet flooding and SDN domains must cooperate to cope with packet filtering at the source; (2) the problem of network traffic management when multiple domains must cooperate and modify routing primitives. / A pesquisa em redes autônomas aplica a teoria de agentes inteligentes e sistemas multiagente em mecanismos de controle de redes. Implantar esses mecanismos autônomos e racionais na rede pode melhorar seu comportamento na presença de cenários de controle muito complexos e dinâmicos. Infelizmente, a construção de mecanismos baseados em agentes para redes não é uma tarefa fácil. A principal dificuldade é criar representações concisas de conhecimento sobre os domínios de redes e mecanismos de raciocínio para lidar com elas. Além disso, a Internet faz com que o projeto de sistemas multiagente para o controle da rede seja uma atividade intrincada, envolvendo a modelagem de diferentes participantes com diversas crenças e intenções. Esses tipos de sistemas geralmente apresentam problemas de escalabilidade devido à falta de incentivos para cooperação entre domínios administrativos. Finalmente, como a estrutura corrente da Internet geralmente impede inovações, mecanismos de redes autônomas construídos não são totalmente implantados em cenários de larga escala. O paradigma das redes definidas por software (SDN) está na esfera dos esforços da Internet do Futuro. No paradigma SDN, o hardware de repasse de pacotes é controlado por software sendo executado como um plano de controle separado.
Softwares de gerenciamento utilizam um protocolo aberto que programa as tabelas de fluxo em diferentes switches e roteadores. Este trabalho apresenta uma discussão geral sobre a integração de redes autônomas e redes definidas por software. Baseado no conhecimento oferecido por essa discussão, é apresentado um arcabouço que provê autonomia para domínios SDN, permitindo que eles atuem cooperativamente quando implantados em cenários com gerenciamento distribuído. Dois estudos de caso são apresentados para importantes questões em aberto na Internet: (1) o problema da mitigação de ataques DDoS quando milhares de atacantes realizam inundação por pacotes e os domínios SDN precisam cooperar para lidar com a filtragem de pacotes na origem; (2) o problema do gerenciamento de tráfego da rede quando múltiplos domínios devem cooperar e realizar modificações nas primitivas de roteamento de redes.
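
The first case study, filtering DDoS traffic close to its sources through inter-domain cooperation, can be illustrated with a toy sketch. The DomainAgent class and its methods are hypothetical; a real deployment would push OpenFlow flow-mods via each domain's controller.

```python
# Hypothetical sketch (illustrative names): cooperative source-side filtering
# across SDN domains, in the spirit of the framework's first case study.

class DomainAgent:
    """One autonomous agent per SDN domain. Agents exchange filtering
    requests so drop rules are installed close to the attack sources."""
    def __init__(self, name: str, peers=None):
        self.name = name
        self.peers: list["DomainAgent"] = peers or []
        self.drop_rules: set[str] = set()

    def install_drop_rule(self, src_prefix: str) -> None:
        # A real controller would push an OpenFlow flow-mod here
        # (match: ipv4_src=src_prefix, action: drop) to the domain's switches.
        self.drop_rules.add(src_prefix)

    def report_attack(self, src_prefix: str, origin_domain: str) -> None:
        """Victim-side agent asks the domain hosting the sources to filter."""
        for peer in self.peers:
            if peer.name == origin_domain:
                peer.install_drop_rule(src_prefix)

edge = DomainAgent("edge-domain")
victim = DomainAgent("victim-domain", peers=[edge])
victim.report_attack("10.1.0.0/16", origin_domain="edge-domain")
print(edge.drop_rules)  # {'10.1.0.0/16'}
```
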
26

Hardware Architecture of an XML/XPath Broker/Router for Content-Based Publish/Subscribe Data Dissemination Systems

El-Hassan, Fadi 25 February 2014 (has links)
The dissemination of various types of data faces ongoing challenges with the growing need of accessing manifold information. Since interest in content is what drives data networks, some new technologies and ideas attempt to cope with these challenges by developing content-based rather than address-based architectures. The Publish/Subscribe paradigm can be a promising approach toward content-based data dissemination, especially as it provides total decoupling between publishers and subscribers. However, in content-based publish/subscribe systems, subscriptions are expressive and the information is often delivered based on the matched expressive content, which does little to alleviate the considerable performance challenges involved. This dissertation explores a hardware solution for disseminating data in content-based publish/subscribe systems. This solution consists of an efficient hardware architecture of an XML/XPath broker that can route information based on content to either other XML/XPath brokers or to ultimate users. A network of such brokers represents an overlay structure for XML content-based publish/subscribe data dissemination systems. Each broker can simultaneously process many XPath subscriptions, efficiently parse XML publications, and subsequently forward notifications that result from high-performance matching processes. At the core of the broker architecture lies an XML parser that utilizes a novel Skeleton CAM-Based XML Parsing (SCBXP) technique, in addition to an XPath processor and a high-performance matching engine. Moreover, the broker employs effective mechanisms for content-based routing, so that subscriptions, publications, and notifications are routed through the network based on content. The inherent reconfigurability of the broker's hardware provides the system architecture with the capability of residing in any FPGA device of moderate logic density. Furthermore, such a system-on-chip architecture is upgradable, should any future hardware add-ons be needed. However, the current architecture is mature and can effectively be implemented on an ASIC device. Finally, this thesis presents and analyzes the experiments conducted on an FPGA prototype implementation of the proposed broker/router. The experiments cover tests of the SCBXP alone and of two phases of development of the whole broker. The corresponding results indicate the high performance that the involved parsing, storing, matching, and routing processes can achieve.
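
As a rough software analogue of the matching step the broker accelerates in hardware, the sketch below checks streamed XML publications against child-axis-only XPath subscriptions. It is a deliberate simplification, not the SCBXP technique or the broker's actual matching engine.

```python
# Hypothetical software sketch of XPath subscription matching over streamed
# XML events (child axes only; the hardware engine handles far richer cases).

import xml.sax

class SubscriptionMatcher(xml.sax.ContentHandler):
    def __init__(self, subscriptions: list[str]):
        super().__init__()
        self.subscriptions = subscriptions
        self.stack: list[str] = []          # current element path
        self.matched: set[str] = set()

    def startElement(self, name, attrs):
        self.stack.append(name)
        path = "/" + "/".join(self.stack)
        for sub in self.subscriptions:
            if path == sub:                 # a notification would be forwarded here
                self.matched.add(sub)

    def endElement(self, name):
        self.stack.pop()

matcher = SubscriptionMatcher(["/stock/quote", "/news/item"])
xml.sax.parseString(b"<stock><quote>42</quote></stock>", matcher)
print(matcher.matched)  # {'/stock/quote'}
```
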
27

Etarch: projeto e desenvolvimento de uma arquitetura para o modelo de título com foco na agregação de tráfego Multicast

Gonçalves, Maurício Amaral 26 September 2014 (has links)
The original design of the Internet was started over forty years ago, in a context totally different from today's. Since that time, the network has gained new purposes and begun to be used in areas and activities that would have been unthinkable during its design. New network-based applications brought a new set of requirements, most of which were not adequately met due to limitations in the architecture. Although the original specification of the Internet played an important role in its popularization, today it represents the main limiter of its evolution, which supports the view that the architecture should be revisited in a clean-slate approach. This strategy encourages innovation in proposals for future networks by not submitting them to the limitations of the current architecture, and frees researchers from the problem of supporting legacy networks. In this context, the Entity Title Model represents a revolutionary way to semantically understand the new Internet requirements, also managing the communication entities and their capabilities, in order to define and implement the best strategies for the treatment of communication. The materialization of this model is performed by the Entity Title Architecture, a new, flexible architecture that proposes a rereading of important aspects of computer networks, particularly the strategies for addressing and routing. This work proposes an implementation of this architecture through a prototype based on the OpenFlow specification, and a practical application with the multicast communication requirement. The proposed approach is able to provide the multicast service efficiently, with an appropriate solution at the network layer, which is naturally supported by the architecture. This work also presents the results of some comparative experiments with a video application, first implemented using the TCP/IP architecture with unicast and multicast services, and then using the Entity Title Architecture, focusing on traffic aggregation through multicast. The results showed that bandwidth consumption in the tests with the proposed approach remains constant, while in the TCP/IP approach with unicast services it grows linearly, proportional to the number of connected clients. In the TCP/IP approach with multicast services, the pattern of bandwidth consumption is similar; however, the Entity Title Architecture approach wins by decreasing unnecessary communication overhead and thus using less bandwidth; by providing better strategies for the control plane, through separation from the data plane; by improving multicast addressability, based on the use of a unique, unambiguous, topology-independent designation; and finally, by presenting a proposal deployable in real networks, given the broad OpenFlow support from leading equipment suppliers. / O projeto original da Internet foi iniciado há mais de quarenta anos, em um contexto totalmente diferente do atual. Nesse tempo, a rede ganhou novos propósitos e passou a ser utilizada em áreas e atividades que seriam impensáveis durante a sua concepção. As novas aplicações às quais a rede foi submetida trouxeram consigo diversos novos requisitos, que em sua maioria não foram adequadamente atendidos devido a limitações na arquitetura. Embora a especificação original da Internet tenha um importante papel na sua popularização, hoje ela atua como principal limitador de sua evolução, o que fundamenta a visão de que a arquitetura deva ser revista em uma abordagem clean slate.
Essa estratégia incentiva a inovação nas propostas para as redes futuras, por não submetê-las às limitações da arquitetura atual, e por libertar os pesquisadores do problema de suporte à rede legada. Neste contexto, o Modelo de Título representa uma forma revolucionária de entender semanticamente os novos requisitos da Internet, observando também as entidades da comunicação e suas capacidades, de maneira a definir e implementar as melhores estratégias para o tratamento da comunicação. A materialização desse modelo é realizada pela Entity Title Architecture, uma nova e flexível arquitetura que propõe uma releitura de importantes aspectos das redes de computadores, sobretudo das estratégias de endereçamento e roteamento. Este trabalho propõe uma implementação dessa arquitetura através de um protótipo baseado na especificação OpenFlow, e uma aplicação prática com o requisito de comunicação multicast. A abordagem proposta é capaz de fornecer o serviço de multicast de forma eficiente, e com uma solução adequada na camada de rede, o que é suportado naturalmente pela arquitetura. Neste trabalho são apresentados também os resultados de alguns experimentos comparativos, com uma aplicação de vídeo, primeiro implementada utilizando a arquitetura TCP/IP com os serviços unicast e multicast, e depois utilizando a Entity Title Architecture com foco na agregação de tráfego através de multicast. Os resultados demonstraram que o consumo de banda nos testes com a abordagem proposta permanece constante, enquanto na abordagem TCP/IP com serviços unicast ele cresce de forma linear, proporcional ao número de clientes conectados. Já na abordagem TCP/IP com serviços multicast, o padrão de consumo de largura de banda é similar; no entanto, a abordagem Entity Title Architecture apresenta ganhos por diminuir o overhead desnecessário na comunicação e, dessa maneira, utilizar uma largura de banda menor; por fornecer melhores estratégias para o plano de controle, através da separação do plano de dados; por melhorar a capacidade de endereçamento do grupo multicast, baseando-se na utilização de uma designação única, não ambígua e independente de topologia; e, por fim, por apresentar uma proposta real de implantação na rede, devido ao crescente suporte ao protocolo OpenFlow promovido pelos principais fabricantes de equipamentos. / Mestre em Ciência da Computação
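
The reported bandwidth pattern, constant under multicast aggregation versus linear growth under unicast, follows directly from counting stream copies; the small worked example below uses an assumed bitrate, not a figure from the experiments.

```python
# Worked example (assumed numbers) of the bandwidth pattern the experiments
# report: unicast load grows linearly with clients, multicast stays constant
# on the shared link.

STREAM_RATE_MBPS = 4.0   # assumed video bitrate

def unicast_load(clients: int) -> float:
    # One copy of the stream per connected client.
    return clients * STREAM_RATE_MBPS

def multicast_load(clients: int) -> float:
    # One copy on the shared link regardless of group size.
    return STREAM_RATE_MBPS if clients > 0 else 0.0

for n in (1, 10, 100):
    print(f"{n:>3} clients: unicast {unicast_load(n):7.1f} Mbps, "
          f"multicast {multicast_load(n):.1f} Mbps")
```
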
29

AN EVALUATION OF SDN AND NFV SUPPORT FOR PARALLEL, ALTERNATIVE PROTOCOL STACK OPERATIONS IN FUTURE INTERNETS

Suresh, Bhushan 09 July 2018 (has links)
Virtualization on top of high-performance servers has enabled the virtualization of network functions like caching, deep packet inspection, etc. Such Network Function Virtualization (NFV) is used to dynamically adapt to changes in network traffic and application popularity. We demonstrate how the combination of Software Defined Networking (SDN) and NFV can support the parallel operation of different Internet architectures on top of the same physical hardware. We introduce our architecture for this approach in an actual test setup, using CloudLab resources. We start off our evaluation with a small setup, where we evaluate the feasibility of the SDN and NFV architecture, and incrementally increase the complexity of the setup to run a live video streaming application. We use two vastly different protocol stacks, namely TCP/IP and NDN (Named Data Networking), to demonstrate the capability of our approach. The evaluation of our approach shows that it introduces a new level of flexibility when it comes to the operation of different Internet architectures on top of the same physical network, and with this flexibility the ability to switch between the two protocol stacks depending on the application.
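
One plausible way to realize the described stack switching is EtherType-based steering in the SDN controller, sketched below. The rule format and port numbers are illustrative, and 0x8624 is assumed as the NDN-over-Ethernet EtherType; the thesis' actual CloudLab configuration is not shown in this record.

```python
# Hypothetical sketch (not the thesis code): an SDN rule set that lets two
# protocol stacks share one physical network by steering traffic to
# stack-specific NFV nodes based on the frame's EtherType.

ETH_TYPE_IPV4 = 0x0800
ETH_TYPE_NDN = 0x8624   # assumed value for NDN-over-Ethernet encapsulation

# Port map: which switch port hosts which virtualized stack element.
STACK_PORTS = {"tcp_ip": 1, "ndn": 2}

def flow_rules() -> list[dict]:
    """Abstract flow-mod descriptions a controller would install."""
    return [
        {"match": {"eth_type": ETH_TYPE_IPV4}, "out_port": STACK_PORTS["tcp_ip"]},
        {"match": {"eth_type": ETH_TYPE_NDN}, "out_port": STACK_PORTS["ndn"]},
    ]

def forward(eth_type: int) -> int:
    """Pick the output port the installed rules would choose for a frame."""
    for rule in flow_rules():
        if rule["match"]["eth_type"] == eth_type:
            return rule["out_port"]
    raise ValueError("no matching stack for this frame")

print(forward(ETH_TYPE_IPV4))  # -> 1 (TCP/IP slice)
print(forward(ETH_TYPE_NDN))   # -> 2 (NDN slice)
```
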
30

Fridays for Future and Mondays for Memes: How Climate Crisis Memes Mobilize Social Media Users

Johann, Michael, Höhnle, Lukas, Dombrowski, Jana 25 August 2023 (has links)
Modern protest movements rely on digital activism on social media, which serves as a conduit for mobilization. In the social media landscape, internet memes have emerged as a popular practice of expressing political protest. Although it is known that social media facilitates mobilization, researchers have neglected how distinct types of content affect mobilization. Moreover, research regarding users’ perspectives on mobilization through memes is lacking. To close these research gaps, this study investigates memes in the context of climate protest mobilization. Based on the four-step model of mobilization, a survey of users who create and share memes related to the Fridays for Future movement on social media (N = 325) revealed that the prosumption of climate crisis memes increases users’ issue involvement and strengthens their online networks. These factors serve as crucial mediators in the relationship between users’ prosumption of climate crisis memes and political participation. The results suggest that mobilization through memes is effective at raising awareness of political issues and strengthening online discussion networks, which means that it has strategic potential for protest movements. By looking at memes from the perspective of their creators and examining a specific type of social media content, this study contributes to the literature on digital mobilization.
