101 |
Uma arquitetura baseada em políticas para o provimento de QoS utilizando princípios de Autonomic Computing / A policy-based architecture for QoS provisioning using autonomic computing principles. Franco, Theo Ferreira. January 2008.
Sistemas corporativos modernos cada vez mais dependentes da rede e a integração de serviços em torno do modelo TCP/IP elevam a exigência de Qualidade de Serviço da infraestrutura de TI. Neste cenário, o dinamismo das redes atuais em conjunto com os novos requisitos de QoS exigem que a infra-estrutura de TI seja mais autônoma e confiável. Para tratar esta questão, o modelo de Gerenciamento de Redes Baseado em Políticas, proposto pelo IETF, vem se consolidando como uma abordagem para controlar o comportamento da rede através do controle das configurações dos seus dispositivos. Porém, o foco deste modelo é o gerenciamento de políticas internas a um domínio administrativo. Esta característica faz com que o modelo possua algumas limitações, tais como a incapacidade de estabelecer qualquer tipo de coordenação entre diferentes PDPs e a impossibilidade de reagir a eventos externos. Visando agregar autonomia ao modelo de gerenciamento baseado em políticas, este trabalho propõe uma arquitetura em camadas que empregue os conceitos de Autonomic Computing relacionados a: i) adaptação dinâmica dos recursos gerenciados em resposta às mudanças no ambiente, ii) integração com sistemas de gerenciamento de outros domínios através do recebimento de notificações destes, iii) capacidade de planejar ações de gerenciamento e iv) promoção de ações de gerenciamento que envolvam mais de um domínio administrativo, estabelecendo uma espécie de coordenação entre PDPs. Para a implementação destes conceitos, a arquitetura prevê o uso de uma camada peer-to-peer (P2P) sobre a plataforma de políticas. Desta forma, a partir de uma notificação recebida, a camada P2P planeja ações visando adaptar o comportamento da rede aos eventos ocorridos na infra-estrutura de TI. As ações planejadas traduzem-se em inclusões ou remoções de políticas da plataforma de políticas responsável por gerenciar a configuração dos dispositivos de rede. Para notificações que envolvam recursos de mais de um domínio administrativo, os peers de gerenciamento agem de forma coordenada para implantar as devidas ações em cada domínio. A arquitetura proposta foi projetada com foco em prover QoS em uma rede com suporte a DiffServ, embora se acredite que a sua estrutura seja genérica o bastante para ser aplicada a outros contextos. Como estudo de caso, foi analisado o emprego da arquitetura em resposta a eventos gerados por uma grade computacional. Foi elaborado ainda um protótipo da arquitetura utilizando o Globus Toolkit 4 como fonte de eventos. / Modern corporate systems, increasingly dependent on the network, and the integration of services around the TCP/IP model raise the Quality of Service (QoS) requirements of the IT infrastructure. In this scenario, the dynamism of current networks together with the new QoS requirements demands a more autonomous and reliable IT infrastructure. To address this issue, the Policy-Based Network Management model, proposed by the IETF, has been consolidated as an approach to control the behavior of the network through the control of the configurations of its devices. However, the focus of this model is the management of policies internal to an administrative domain. This characteristic brings some limitations to the model, such as the inability to establish any kind of coordination between different PDPs and the inability to react to external events.
Aiming to add autonomy to the Policy-Based Network Management model, this work proposes a layered architecture based on the concepts of Autonomic Computing related to: i) dynamic adaptation of the managed resources in response to changes in the environment, ii) integration with management systems of other domains through the reception of notifications from these systems, iii) the ability to plan management actions, and iv) the execution of multi-domain management actions, establishing a kind of coordination between PDPs. To implement these concepts, the architecture was designed with a peer-to-peer (P2P) layer above the policy platform. Thus, from a received notification, the P2P layer plans actions aiming to adapt the network behavior in response to the events that occurred in the IT infrastructure. The planned actions are, in practice, additions or removals of policies in the policy platform responsible for managing the configuration of the network devices. For notifications related to resources of more than one administrative domain, the management peers act in a coordinated way in order to deploy the suitable actions in each domain. The proposed architecture was designed with a focus on providing QoS in a network with DiffServ support, although we believe that its structure is generic enough to be applied to other contexts. As a case study, the use of the architecture in response to events generated by a computational grid was analyzed. Additionally, a prototype of the architecture was built using Globus Toolkit 4 as an event source.
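As a rough illustration of the behaviour just described, the sketch below shows how a management peer in the P2P layer might translate a received notification into policy additions on an underlying policy platform. It is a minimal sketch only; all names (Notification, Policy, PolicyPlatform, ManagementPeer) and the example rule are hypothetical assumptions, not code from the thesis.

```java
import java.util.List;

// Hypothetical notification emitted by an external domain (e.g., a grid middleware).
record Notification(String sourceDomain, String event, String affectedService) {}

// Hypothetical policy understood by the policy platform (e.g., a DiffServ marking rule).
record Policy(String id, String condition, String action) {}

// Hypothetical facade over the policy platform (PDP) that configures network devices.
interface PolicyPlatform {
    void addPolicy(Policy p);
    void removePolicy(String policyId);
}

// A management peer of the P2P layer: plans policy changes in response to notifications.
class ManagementPeer {
    private final PolicyPlatform platform;

    ManagementPeer(PolicyPlatform platform) { this.platform = platform; }

    // Planning step: map an external event to policy additions on the policy platform.
    void onNotification(Notification n) {
        for (Policy p : plan(n)) {
            platform.addPolicy(p); // adapt device configuration via the PDP
        }
    }

    private List<Policy> plan(Notification n) {
        // Illustrative rule: when a grid job starts, prioritize its traffic class.
        if ("JOB_STARTED".equals(n.event())) {
            return List.of(new Policy("qos-" + n.affectedService(),
                    "traffic.service == " + n.affectedService(),
                    "mark DSCP EF"));
        }
        return List.of();
    }
}
```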
|
102 |
An effective approach for network management based on situation management and mashups. Rendon, Oscar Mauricio Caicedo. January 2015.
The Situation Management discipline is intended to address situations that are happening or that might happen in dynamic systems. In this way, the discipline supports the provisioning of solutions that enable analyzing, correlating, and coordinating interactions among people, information, technologies, and actions targeted at overcoming situations. Over recent years, Situation Management has been employed in diverse domains ranging from disaster response to public health. Notwithstanding, up to now, it has not been used to deal with the unexpected, dynamic, and heterogeneous situations that network administrators face in their daily work; in this thesis, these situations are referred to as network management situations. Mashup technology also allows the creation of solutions, named mashups, aimed at coping with situations. Mashups are composite Web applications built by end-users through the combination of Web resources available across the Internet. These composite Web applications have been useful for managing situations in several domains ranging from telecommunication services to water floods. In particular, in the network management domain, mashup technology has been used to accomplish specific tasks, such as botnet detection and the visualization of traffic of the Border Gateway Protocol. In the network management domain, large research efforts have been made to automate and facilitate management tasks. However, so far, none of these efforts has carried out network management by means of Situation Management and mashup technology. Thus, the goal of this thesis is to investigate the feasibility of using Situation Management and mashups as an effective (in terms of complexity, time consumption, traffic, and response time) approach for network management. To achieve this goal, the thesis introduces an approach formed by mashments (special mashups devised for coping with network management situations), the Mashment Ecosystem, the process to develop and launch mashments, the Mashment System Architecture, and the Mashment Maker. An extensive analysis of the approach was conducted on networks based on the Software-Defined Networking paradigm and virtual nodes. The results of the analysis provided directions and evidence that corroborate the feasibility of using Situation Management and mashups as an effective approach for network management.
|
103 |
Um serviço de self-healing baseado em P2P para manutenção de redes de computadores / A P2P-based self-healing service for computer network maintenance. Duarte, Pedro Arthur Pinheiro Rosa. January 2015.
Observou-se nos últimos anos um grande aumento na complexidade das redes. Surgiram também novos desafios para o gerenciamento dessas redes. A dimensão atual e as tendências de crescimento das infraestruturas têm inviabilizado as técnicas de gerenciamento de redes atuais, baseadas na intervenção humana. Por exemplo, a heterogeneidade dos elementos gerenciados obriga que administradores e gerentes lidem com especificidades de implantação que vão além dos objetivos gerenciais. Considerando as áreas funcionais da gerência de redes, a gerência de falhas apresenta impactos operacionais interessantes. Estima-se que 33% dos custos operacionais estão relacionados com a prevenção e recuperação de falhas e que aproximadamente 44% desse custo visa à resolução de problemas causados por erros humanos. Dentre as abordagens de gerência de falhas, o Self-Healing objetiva minimizar as interações humanas nas rotinas de gerenciamento de falhas, diminuindo dessa forma erros e demandas operacionais. Algumas propostas sugerem que o Self-Healing seja planejado no momento do projeto das aplicações. Tais propostas são inviáveis de aplicação em sistemas legados. Outras pesquisas sugerem a análise e instrumentação das aplicações em tempo de execução. Embora aplicáveis a sistemas legados, a análise e instrumentação em tempo de execução estão fortemente acopladas às tecnologias e detalhes de implementação das aplicações. Por esse motivo, é difícil aplicar tais propostas, por exemplo, em um ambiente de rede que abrange muitas entidades gerenciadas implantadas através de diferentes tecnologias. Porém, parece plausível oferecer aos administradores e gerentes facilidades através das quais eles possam expressar seus conhecimentos sobre anomalias e falhas de aplicações, bem como mecanismos através dos quais esses conhecimentos possam ser utilizados no gerenciamento de sistemas. Essa dissertação de mestrado tem como objetivo apresentar e avaliar uma solução comum que introduza nas redes capacidades de self-healing. A solução apresentada utiliza-se de workplans para capturar o conhecimento dos administradores em como diagnosticar e recuperar anomalias e falhas em redes. Além disso, o projeto e implementação de um framework padrão para detecção e notificação de falhas é discutido no âmbito de um sistema de gerenciamento baseado em P2P. Por último, uma avaliação experimental clarifica a viabilidade do ponto de vista operacional. / In recent years, a huge rise in network complexity has been witnessed. Along with the rise in complexity, many management challenges have also arisen. For instance, the heterogeneity of managed entities demands that administrators and managers deal with cumbersome implementation and deployment specificities. Moreover, the current size and growth trends of infrastructures show that it is becoming infeasible to rely on human-in-the-loop management techniques. Inside the problem domain of network management, Fault Management is appealing because of its impact on operational costs. Research estimates that more than 33% of operational costs are related to preventing and recovering from faults, and that about 40% of this investment is directed at solving human-caused operational errors. Hence, addressing human interaction is an undeniable need. Among different approaches, Self-Healing, a property of the Autonomic Network Management proposal, aims to remove human interaction and decision-making from Fault Management loops, thereby unburdening administrators and managers from performing Fault Management-related tasks.
Some research on Self-Healing approaches assumes that Fault Management capabilities should be planned at design time. Such approaches are impossible to apply to legacy systems. Other research suggests runtime analysis and instrumentation of applications' bytecode. Albeit applicable to some legacy systems, these proposals are tightly coupled to implementation details of the underlying technologies. For this reason, it is hard to apply such proposals end-to-end, for example, in a scenario encompassing many managed entities implemented with different technologies. However, it is possible to offer administrators and managers facilities to express their knowledge about network anomalies and faults, and facilities to leverage this knowledge. This master's dissertation aims to present and evaluate a solution to imbue network management systems with self-healing capabilities. The solution relies on workplans as a means to gather administrators' and managers' knowledge on how to diagnose and heal network anomalies and faults. Besides that, the design and implementation of a standard framework for fault detection and notification customization is discussed, taking a P2P-based Network Management System as its foundation. At last, an experimental evaluation makes clear the proposal's feasibility from the operational point of view.
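To make the idea of workplans more concrete, the sketch below outlines how an administrator-authored workplan with a diagnosis step and healing actions might be represented and executed by a self-healing loop. This is a simplified illustration under assumed names (Workplan, HealingAction, SelfHealingService); it is not the dissertation's actual framework API.

```java
import java.util.List;

// Hypothetical representation of an administrator-authored workplan:
// a named fault symptom, a diagnosis step, and an ordered list of healing steps.
interface Workplan {
    String symptom();                       // e.g., "high packet loss on edge link"
    boolean diagnose(ManagedEntity entity); // confirms whether the fault is present
    List<HealingAction> healingActions();   // actions applied when diagnosis succeeds
}

interface ManagedEntity { String id(); }

interface HealingAction { void apply(ManagedEntity entity); }

// Simplified self-healing loop: when a fault notification arrives, the matching
// workplans are evaluated and their healing actions applied.
class SelfHealingService {
    private final List<Workplan> workplans;

    SelfHealingService(List<Workplan> workplans) { this.workplans = workplans; }

    void onFaultNotification(ManagedEntity entity) {
        for (Workplan wp : workplans) {
            if (wp.diagnose(entity)) {
                wp.healingActions().forEach(a -> a.apply(entity));
            }
        }
    }
}
```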
|
106 |
An Efficient Network Management System using Agents for MANETs. Channappagoudar, Mallikarjun B. January 2017.
Network management plays a vital role in keeping a network and its applications working efficiently. Network management in MANETs is a crucial and challenging task, as these networks are characterized by a dynamic environment and scarcity of resources. There are various existing approaches for network management in MANETs.
The Ad hoc Network Management Protocol (ANMP) was one of the first efforts and introduced an SNMP-based solution for MANETs. An alternative SNMP-based solution is proposed by the GUERRILLA Management Architecture (GMA). Due to the self-organizing nature of MANETs, the management task has to be distributed. Policy-based network management offers this feature to some extent, by executing and applying policies previously defined by the network manager; otherwise, realization and control become too complex.
Most existing works consider only the current status of the MANET to make network management decisions. Dynamic and intelligent decisions, however, require taking both the present situation and the related history information of the nodes into consideration. In this context, we propose a Network Management System using Agents (NMSA) for MANETs, addressing major issues such as node monitoring, location management, resource management, and QoS management. Solutions to these issues are discussed as independent protocols and are finally combined into a single network management system, i.e., NMSA.
Agents are autonomous, problem-solving computational entities capable of operating effectively in dynamic environments. Agents offer cooperation, intelligence, and mobility as advantages. Agent platforms provide different services to agents, such as execution, mobility, communication, security, tracking, persistence, and directory services. The platform's execution environment allows the agents to run, and the mobility service allows them to travel among different execution environments. The entire management task is delegated to agents, which then execute the management logic in a distributed and autonomous fashion. In our work, we use static and mobile agents to find solutions to management issues in a MANET.
We have proposed a node monitoring protocol (NMP) for MANETs, which uses both a static agent (SA) and mobile agents (MAs) to monitor the status of nodes in the network. It monitors the gradual energy loss, buffer, bandwidth, and mobility of nodes running low to high loads of mobile applications. The protocol assumes the MANET is divided into zones and sectors. The functioning of the protocol is divided into two segments: the NMP main segment, which runs at the chosen resource-rich node (RRN) at the center of the MANET and makes use of the SA residing at the same RRN, and the NMP subsegment, which runs in the migrated MAs at the other nodes. Initially, the SA creates MAs and dispatches one MA to each zone in order to monitor the health conditions and mobility of the network's nodes. Each MA carrying the NMP subsegment migrates into the sectors of its respective zone and monitors resources such as bandwidth, buffer, energy level, and mobility of nodes. After collecting the node information, and before moving to the next sector, the MAs transfer the collected information to the SA. The SA in turn coordinates with other modules to analyze the node status information.
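A simplified sketch of the kind of per-node status report a mobile agent might collect in one sector before handing it back to the static agent is shown below. The NodeStatus fields and the NodeMonitoringAgent class are illustrative assumptions, not the protocol's actual data structures.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical status report collected by a mobile agent for one node.
record NodeStatus(String nodeId, double energyLevel, int freeBufferKb,
                  double availableBandwidthKbps, double mobilitySpeed) {}

// Sketch of the MA-side collection loop for one sector of a zone.
class NodeMonitoringAgent {
    // Visits each node of the sector, reads its local resources, and returns the batch
    // that would be transferred to the static agent before moving to the next sector.
    List<NodeStatus> monitorSector(List<String> sectorNodeIds) {
        List<NodeStatus> collected = new ArrayList<>();
        for (String nodeId : sectorNodeIds) {
            collected.add(readLocalStatus(nodeId));
        }
        return collected;
    }

    // Placeholder values: on a real MANET node this would query the OS/network stack.
    private NodeStatus readLocalStatus(String nodeId) {
        return new NodeStatus(nodeId, 0.85, 512, 250.0, 1.2);
    }
}
```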
We have validated the protocol by performing conformance testing of the proposed node monitoring protocol (NMP) for MANETs. We used SDL to obtain MSCs, which represent the scenario descriptions as sequence diagrams and, in turn, generate test cases and test sequences. TTCN-3 was then used to execute the test cases against the generated test sequences to check the conformance of the protocol with the given specification.
We have proposed a location management protocol (LMP) for locating the nodes of a MANET, maintaining uninterrupted, high-quality service for distributed applications by intelligently anticipating changes in the location of nodes through chosen neighborhood nodes. The LMP main segment, which runs at the chosen RRN located at the center of the MANET, uses the SA to coordinate with other modules and MAs to predict nodes with abrupt movement and replace them with nearby chosen nodes that have lower mobility.
We have proposed a resource management protocol (RMP) for MANETs. The protocol makes use of the SA and MAs for fair allocation of resources among the nodes of a MANET. The RMP main segment, which runs at the chosen RRN located at the center of the MANET, uses the SA to coordinate with other modules and MAs to allocate resources among nodes running different applications based on priority. The protocol distributes and parallelizes message propagation (mobile agents carrying information) in an efficient way in order to minimize the number of messages passed, reducing the usage of network resources and improving the scalability of the network.
We have proposed a QoS management protocol (QMP) for MANETs. The QMP main segment, which runs at the chosen RRN located at the center of the MANET, uses the SA to coordinate with other modules and MAs to allocate resources among nodes running different applications based on priority over QoS, and later to reallocate resources among the priority applications through negotiation and renegotiation as QoS requirements vary. Performance testing of the protocol is carried out using TTCN-3: the test cases generated for the defined QoS requirements are executed with TTCN-3 to test the associated QoS parameters, which constitutes the performance testing of the proposed QoS management protocol for MANETs.
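The priority-driven allocation and renegotiation step can be pictured with the following simplified sketch, in which bandwidth is granted to applications in descending priority order, so lower-priority applications are the first to lose resources when capacity shrinks. The AppRequest and QosAllocator names are assumptions for illustration, not the QMP's actual implementation.

```java
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical application descriptor: priority (higher is more important)
// and the minimum bandwidth it negotiates for.
record AppRequest(String appId, int priority, double minBandwidthKbps) {}

class QosAllocator {
    // Allocates a node's available bandwidth to applications in priority order;
    // on renegotiation, calling this again with a smaller capacity cuts the
    // lower-priority applications first.
    Map<String, Double> allocate(double availableKbps, List<AppRequest> requests) {
        Map<String, Double> allocation = new LinkedHashMap<>();
        double remaining = availableKbps;
        List<AppRequest> byPriority = requests.stream()
                .sorted(Comparator.comparingInt(AppRequest::priority).reversed())
                .toList();
        for (AppRequest req : byPriority) {
            double granted = Math.min(req.minBandwidthKbps(), remaining);
            allocation.put(req.appId(), granted);
            remaining -= granted;
        }
        return allocation;
    }
}
```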
We have combined the independently developed protocols for node monitoring, location management, resource management, and QoS management into a single network management system, called the Network Management System using Agents (NMSA) for MANETs, and tested it in different environments. We have implemented NMSA on the Java Agent DEvelopment framework (JADE) platform.
Our network management system is a distributed system. It is divided into two parts: the Network Management Main Segment and the Network Management Subsegment. A resource-rich node (RRN) chosen at the center of the MANET hosts the main segment of NMSA and controls the management activities. The other mobile nodes in the network run MAs carrying the subsegments of NMSA. The developed NMSA comprises the NMSA main, the zone and sector segregation scheme, and the NMP, LMP, RMP, and QMP main segments at the RRN, along with the deployed SA. The migrated MA at a mobile node carries the subsegments of NMP, LMP, RMP, and QMP, respectively. NMSA uses two databases, namely the zones and sectors database and the node history database.
Implementation of the proposed work is carried out in a confined environment with the JDK and JADE installed on the network nodes. The launched platform automatically creates the AMS and DF, along with the MTP for message exchange over the channel. A single JVM installed on each host executes to provide the containers for the agents on that host; this is the environment offered for the execution of agents, and many agents can execute in parallel. The main container is the one that holds the AMS, the DF, and the RMI registry, which are part of the JADE environment and offer a complete runtime environment for the execution of agents. The distribution of the platform over the containers of many nodes is shown in Fig. 1.
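For reference, a minimal sketch of how a JADE main container (hosting the AMS and DF) and a static agent might be launched through JADE's in-process interface is shown below. The agent class name nmsa.StaticAgent is a placeholder, and the snippet illustrates typical JADE bootstrapping rather than the actual NMSA code.

```java
import jade.core.Profile;
import jade.core.ProfileImpl;
import jade.core.Runtime;
import jade.wrapper.AgentController;
import jade.wrapper.ContainerController;
import jade.wrapper.StaleProxyException;

public class NmsaLauncher {
    public static void main(String[] args) throws StaleProxyException {
        // Boot the JADE runtime; the main container automatically hosts the AMS and DF.
        Runtime rt = Runtime.instance();
        Profile profile = new ProfileImpl();
        ContainerController mainContainer = rt.createMainContainer(profile);

        // Start the static agent at the RRN; the class name is hypothetical.
        AgentController sa = mainContainer.createNewAgent(
                "StaticAgent", "nmsa.StaticAgent", null);
        sa.start();
    }
}
```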
NMSA is based on the Linux platform, which provides a distributed environment, and JADE containers can run on various platforms. Java is the language used for code development. A middle layer, JDBC (Java Database Connectivity) with SQL, provides connectivity between the application and the databases.
The experimental results suggest that the proposed protocols are effective and bring dynamism and adaptiveness to the system they are applied to, as well as reductions in network overhead (less bandwidth consumption) and response time.
|
107 |
PBQoS - uma arquitetura de gerenciamento baseado em políticas para distribuição otimizada de conteúdo multimídia com controle de QoS em redes Overlay. / PBQoS - a Policy-based management architecture for optimized multimedia content distribution to control the QoS in an Overlay network. Almeida, Fernando Luiz de. 16 December 2010.
Avanços nas tecnologias de comunicação e processamento de sinais, além de mudar a forma de realizar negócios em todo o mundo, têm motivado o surgimento crescente de serviços e aplicações multimídia na Internet. Como conseqüência, é possível conceber, desenvolver, implantar e operar serviços de distribuição de vídeo digital na Internet, tanto na abordagem sob demanda quanto ao vivo. Com o aumento das aplicações multimídia na rede, torna-se cada vez mais complexo e necessário definir um modelo eficiente que possa realizar o gerenciamento efetivo e integrado de todos os elementos e serviços que compõem um sistema computacional. Pensando assim, este trabalho propõe uma arquitetura de gerenciamento baseado em políticas aplicada à distribuição de conteúdo multimídia com controle de QoS (Quality of Service) em redes de sobreposição (overlay). A arquitetura é baseada nos padrões de gerenciamento por políticas definidos pela IETF (Internet Engineering Task Force), que, através de informações contextuais (rede e clientes), administra os serviços disponíveis no sistema. Faz uso dos requisitos de QoS providos pela rede de distribuição e os compara com os requisitos mínimos exigidos pelos perfis das aplicações previamente mapeados em regras de políticas. Dessa forma, é possível controlar e administrar os elementos e serviços do sistema, a fim de melhor distribuir recursos aos usuários deste sistema. / Advances in communication technologies and signal processing have not only changed the way business is conducted around the world, but have also driven the development of services and multimedia applications on the Internet. As a result, it is possible to design, develop, deploy, and operate services for digital video distribution on the Internet, both on demand and live. Because of the increase in multimedia applications on the network, it has become increasingly complex and necessary to define an efficient architecture that can achieve the effective and integrated management of all the elements and services that compose a computer system. With this in mind, this study proposes a robust and efficient architecture based on IETF (Internet Engineering Task Force) policy management standards applied to multimedia content distribution with QoS (Quality of Service) control in overlay networks. This architecture makes use of the QoS requirements provided by the distribution network and compares them with the minimum requirements demanded by each type of application previously mapped in the policy rules. This system makes it possible to control and manage system information and services, and also to better distribute resources to system users.
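As an illustration of comparing the QoS offered by the distribution network with the minimum requirements mapped in policy rules, the sketch below shows a simplified admission check. The AppProfile, NetworkQoS, and PolicyDecisionPoint names and thresholds are hypothetical and not taken from the PBQoS architecture.

```java
import java.util.List;

// Hypothetical QoS profile of an application type, as mapped into a policy rule.
record AppProfile(String appType, double minBandwidthMbps, double maxDelayMs) {}

// Hypothetical QoS measurements provided by the overlay distribution network.
record NetworkQoS(double bandwidthMbps, double delayMs) {}

class PolicyDecisionPoint {
    private final List<AppProfile> profiles;

    PolicyDecisionPoint(List<AppProfile> profiles) { this.profiles = profiles; }

    // Compares the QoS offered by the distribution network with the minimum
    // requirements of the application profile and decides whether to admit the flow.
    boolean admit(String appType, NetworkQoS offered) {
        return profiles.stream()
                .filter(p -> p.appType().equals(appType))
                .anyMatch(p -> offered.bandwidthMbps() >= p.minBandwidthMbps()
                            && offered.delayMs() <= p.maxDelayMs());
    }
}
```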
|
109 |
Ontology mapping: a logic-based approach with applications in selected domains. Wong, Alfred Ka Yiu, Computer Science & Engineering, Faculty of Engineering, UNSW. January 2008.
With the advent of the Semantic Web and recent standardization efforts, Ontology has quickly become a popular and core semantic technology. Ontology is seen as a solution provider to knowledge-based systems. It facilitates tasks such as knowledge sharing, reuse, and intelligent processing by computer agents. A key problem addressed by Ontology is the semantic interoperability problem. Interoperability in general is a common problem across application domains, and semantic interoperability is the hardest and an ongoing research problem. It is required for systems to exchange knowledge and to have the meaning of that knowledge accurately and automatically interpreted by the receiving systems. The innovation is to allow knowledge to be consumed and used accurately in ways not foreseen by the original creator. While Ontology promotes semantic interoperability across systems by unifying their knowledge bases through consensual understanding and common engineering and processing practices, it does not solve the semantic interoperability problem at the global level. As individuals are increasingly empowered with tools, ontologies will eventually be created more easily and rapidly at a near-individual scale. Global semantic interoperability between heterogeneous ontologies created by small groups of individuals will then be required. Ontology mapping is a mechanism for providing semantic bridges between ontologies. As ontology mapping promotes semantic interoperability across ontologies, it is seen as the solution provider to the global semantic interoperability problem. However, there is no single ontology mapping solution that caters for all problem scenarios; different applications require different mapping techniques. In this thesis, we analyze the relations between ontology, semantic interoperability, and ontology mapping, and promote an ontology-based semantic interoperability solution. We propose a novel ontology mapping approach, namely OntoMogic. It is based on first-order logic and model theory. OntoMogic supports approximate mapping and produces structures (approximate entity correspondences) that represent alignment results between concepts. OntoMogic has been implemented as a coherent system and is applied in different application scenarios. We present case studies in the network configuration, security intrusion detection, and IT governance & compliance management domains. The full process from ontology engineering to mapping has been demonstrated to promote ontology-based semantic interoperability.
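To illustrate the notion of an approximate entity correspondence, the sketch below aligns concepts of two ontologies whenever a crude similarity score exceeds a threshold. This is a generic string-similarity illustration only; it does not reproduce OntoMogic's first-order-logic and model-theoretic approach, and all names are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical alignment result: a pair of concepts with a confidence score.
record Correspondence(String sourceConcept, String targetConcept, double confidence) {}

class SimpleAligner {
    // Aligns concepts of two ontologies whenever a crude similarity exceeds a threshold.
    List<Correspondence> align(List<String> sourceConcepts, List<String> targetConcepts,
                               double threshold) {
        List<Correspondence> result = new ArrayList<>();
        for (String s : sourceConcepts) {
            for (String t : targetConcepts) {
                double sim = similarity(s.toLowerCase(), t.toLowerCase());
                if (sim >= threshold) {
                    result.add(new Correspondence(s, t, sim));
                }
            }
        }
        return result;
    }

    // Crude similarity: shared-character ratio (placeholder for a real lexical metric).
    private double similarity(String a, String b) {
        long shared = a.chars().distinct().filter(c -> b.indexOf(c) >= 0).count();
        long total = (a + b).chars().distinct().count();
        return total == 0 ? 0.0 : (double) shared / total;
    }
}
```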
|