271 |
A reuse-based approach to promote the adoption of InfoVis techniques for network and service management tasks / Uma abordagem baseada em reuso para promover a adoção de técnicas de visualização de informações no contexto de gerenciamento de redes e serviços. Guimarães, Vinícius Tavares. January 2016 (has links)
Ao longo dos anos, diferentes ferramentas vêm sendo utilizadas pelos administradores de rede para realizar as tarefas de gerenciamento (por exemplo, protocolos de gerência e soluções de monitoramento de rede). Dentre tais ferramentas, a presente tese foca em Visualização de Informações (ou simplesmente InfoVis). Essencialmente, entende-se que o objetivo final dessas ferramentas de gestão é diminuir a complexidade e, consequentemente, otimizar o trabalho diário dos administradores. Assim, eles podem melhorar sua produtividade, o que incide diretamente na redução de custos. Com base nesse pressuposto, esta tese tem como objetivo investigar como promover a adoção de técnicas InfoVis pelos administradores de rede, com foco em melhorar a produtividade e diminuir os custos. A percepção chave é que, na maioria dos casos, os administradores de rede não são habilitados no domínio InfoVis. Desse modo, a escolha por adotar técnicas InfoVis requer a imersão em um campo desconhecido, podendo gerar, assim, um risco elevado nos indicadores de produtividade e custos. Em essência, esta tese argumenta que o emprego de técnicas InfoVis pelos administradores pode ser muito trabalhoso, despendendo um montante muito significativo de tempo, o que leva a diminuir a produtividade e, consequentemente, eleva os custos de gerenciamento. Focando essa questão, é apresentada uma proposta para promover a adoção de técnicas InfoVis pelo encorajamento do reuso. Argumenta-se que os conceitos e princípios de reuso propostos e padronizados pelo campo da engenharia de software podem ser adaptados e empregados, uma vez que a construção de visualizações (ou seja, o projeto e o desenvolvimento) é, primariamente, uma tarefa de desenvolvimento de software. Assim, a avaliação da proposta apresentada nesta tese utiliza o método Common Software Measurement International Consortium (COSMIC) Functional Size Measurement (FSM), o qual permite estimar o dimensionamento de software através de pontos por função.
A partir deste método, torna-se então possível a estimativa de esforço e, consequentemente, de produtividade e custos. Os resultados mostram a viabilidade e a eficácia da abordagem proposta (em termos de produtividade e custos), bem como os benefícios indiretos que o reuso sistemático pode fornecer quando da adoção de visualizações para auxílio nas tarefas de gerenciamento de redes. / Throughout the years, several tools have been used by network administrators to accomplish management tasks (e.g., management protocols and network monitoring solutions). Among such tools, this thesis focuses on Information Visualization (a.k.a. InfoVis). Mainly, it is understood that the ultimate goal of these management tools is to decrease complexity and, consequently, optimize the everyday work of administrators. Thus, they can increase their productivity, which leads to cost reduction. Based on this assumption, this thesis aims at investigating how to promote the adoption of InfoVis techniques by network administrators, focusing on enhancing productivity and lowering costs. The key insight is that, in most cases, network administrators are unskilled in InfoVis. Therefore, the choice to adopt visualizations can require an immersion into the unknown that can be too risky regarding productivity and cost. In essence, this thesis argues that the employment of InfoVis techniques by administrators can be very laborious, demanding a significant amount of effort, which decreases their productivity and, consequently, increases management costs. To overcome this issue, an approach to promote the adoption of InfoVis techniques by encouraging their reuse is introduced. It is argued that the concepts and principles of software reuse proposed and standardized in the software engineering field can be adapted and employed, since the building of visualizations (i.e., their design and development) can be defined primarily as a software development task.
So, the evaluation of the proposal introduced in this thesis employs the Common Software Measurement International Consortium (COSMIC) Functional Size Measurement (FSM) method, which measures software size through Function Points (FP). From this method, it was possible to estimate effort and, consequently, productivity and costs. Results show the feasibility and effectiveness of the proposed approach (in terms of productivity and cost), as well as some indirect benefits that systematic reuse can provide in the adoption of InfoVis techniques to assist in management tasks.
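The COSMIC FSM method mentioned in this abstract measures functional size by counting data movements (Entry, Exit, Read, Write), one COSMIC Function Point (CFP) each; effort, productivity, and cost estimates then follow from the size. A minimal sketch of that arithmetic, with entirely hypothetical data movements, productivity rate, and hourly cost (none of these figures come from the thesis):

```python
# Illustrative sketch of COSMIC-style estimation: functional size is the
# count of data movements (Entry, Exit, Read, Write), one CFP each.
# The productivity rate and hourly cost below are hypothetical figures.

def cosmic_size(data_movements):
    """Functional size in CFP: one point per data movement."""
    allowed = {"Entry", "Exit", "Read", "Write"}
    assert all(m in allowed for m in data_movements)
    return len(data_movements)

def estimate_effort(size_cfp, productivity_cfp_per_hour):
    """Effort in person-hours = size / productivity."""
    return size_cfp / productivity_cfp_per_hour

# A visualization reused from a library might need fewer data movements
# than one built from scratch, which is the cost argument for reuse.
from_scratch = ["Entry", "Read", "Read", "Write", "Exit", "Exit"]
with_reuse = ["Entry", "Read", "Exit"]

for name, moves in [("from scratch", from_scratch), ("with reuse", with_reuse)]:
    size = cosmic_size(moves)
    effort = estimate_effort(size, productivity_cfp_per_hour=0.5)
    cost = effort * 40  # hypothetical rate of $40/person-hour
    print(f"{name}: {size} CFP, {effort:.0f} h, ${cost:.0f}")
```

With these invented numbers, the reuse scenario halves size, effort, and cost, which mirrors the direction of the thesis's argument rather than its measured results.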
|
272 |
Algoritmo e arquitetura para a localização de falhas em sistemas distribuídos. / Algorithm and architecture for fault localization in distributed systems. Jamil Kalil Naufal Júnior. 30 May 2000 (has links)
Devido à tendência mundial de crescimento sem precedentes na história das telecomunicações, verifica-se atualmente um aumento crescente no segmento das redes de comunicações com respeito ao seu tamanho e aos seus correspondentes elementos, tornando o seu gerenciamento uma atividade árdua e complexa do ponto de vista de sua operação. Adicione-se a isto o fato de que o sucesso no empreendimento dos diferentes negócios atuais, no contexto da utilização das redes de comunicações, é dependente exclusivamente da qualidade do serviço e do funcionamento dessas redes. A ocorrência de falhas em qualquer sistema de comunicação é de certa forma inevitável e, portanto, mais crítica em sistemas de grande porte, tanto em termos de quantidade quanto de variedade de falhas. Dessa forma, é desejável que sejam desenvolvidas novas técnicas que permitam à rede de comunicação uma maior rapidez e eficiência na detecção e correção de operações sistêmicas anormais e, consequentemente, que sejam consideradas como atributos prioritários em seu projeto. Em outras palavras, a implementação destas novas técnicas permitirá ao sistema a capacidade de detecção, isolação e reconfiguração de um dado componente falho com referência aos requisitos de maior rapidez e eficiência, aumentando sobremaneira a disponibilidade da rede. Neste trabalho de dissertação são propostos um algoritmo e uma arquitetura para o gerenciamento de falhas, além de se verificar a sua aderência quanto ao requisito de disponibilidade de rede. / Due to the unprecedented growth trend in the history of telecommunications, communication networks are currently increasing enormously in size and in their corresponding elements, making their management an arduous and complex activity from an operational point of view.
It must also be taken into account that the success of the different current business enterprises, in the context of communication network usage, is exclusively dependent on the quality of service and the correct working of these networks. The occurrence of faults in any communication system is generally inevitable and therefore more critical in large system configurations, in terms of the number and variety of faults. Thus, it is desirable that new techniques be developed to allow communication networks to handle the detection and correction of abnormal systemic operations faster and more efficiently; accordingly, these techniques must be considered a priority attribute in new network designs. In other words, the implementation of these new techniques will give systems the capacity to detect, isolate, and reconfigure a given faulty component quickly and efficiently, greatly increasing network availability. This dissertation proposes an algorithm and an architecture for fault management and verifies their adherence to the network availability requirement.
|
273 |
Redes de metrologia: um estudo de caso da rede de defesa e segurança do SIBRATEC / Metrology network: a case study on the metrology network of defense and security from SIBRATEC. Marisa Ferraz Figueira Pereira. 23 February 2016 (has links)
Nesta pesquisa, objetivou-se entender os efeitos da possível melhoria da infraestrutura laboratorial dos laboratórios da Rede de Metrologia de Defesa e Segurança (RDS) do Programa Sibratec e da atuação da gestão em rede na oferta de apoio e de serviços metrológicos às empresas do setor de defesa e segurança, dentro dos propósitos do projeto. Procurou-se também identificar a existência de lacunas na oferta de serviços de calibração/ensaio para suprir a demanda das indústrias de defesa e segurança, bem como analisar a adequação do projeto RDS a essas demandas, tendo como propósito contribuir com informações para ações futuras. A pesquisa desenvolvida é do tipo qualitativo, com características de pesquisa exploratória, fundamentada em estudo de caso. Foi estruturada em duas partes, envolvendo coleta de dados primários e de dados secundários. Para a coleta dos dados primários foram elaborados dois questionários, sendo um (Questionário A) destinado aos cinco representantes dos laboratórios na RDS e outro (Questionário B) aos contatos das 63 empresas do setor de defesa e segurança que necessitam de serviços de calibração e de ensaios pertinentes às áreas de atuação dos laboratórios da RDS. Foram obtidas respostas de quatro representantes dos laboratórios da RDS e de 26 empresas do setor de defesa e segurança. Os dados secundários resultaram de pesquisa documental. A análise dos resultados foi feita com base em cinco dimensões, definidas com o objetivo de organizar e melhorar o entendimento do cenário da pesquisa. São elas: abrangência do projeto, regionalidade, gestão em rede, rastreabilidade metrológica e importância e visibilidade da RDS. Os resultados indicaram que a atuação da RDS não interferiu, até então, na rastreabilidade metrológica dos produtos das empresas do setor de defesa e segurança participantes da pesquisa.
/ This study aimed to understand the effects of the possible improvement of the laboratory infrastructure of the Defense and Security Metrology Network (RDS) of the SIBRATEC program, and of the role of network management in offering support and metrological services to defense and security sector enterprises, within the purposes of the project. It also aimed to identify gaps in the offer of calibration and/or testing services to meet the demands of the defense and security industries, and to analyze the adequacy of the RDS project to those demands, with the purpose of contributing information for future actions. The research is qualitative, with exploratory characteristics, based on a case study. It was structured in two parts, involving the collection of primary and secondary data. To collect the primary data, two questionnaires were prepared: one (Questionnaire A) for the five representatives of the RDS laboratories, and the other (Questionnaire B) for the contacts of 63 defense and security enterprises that need calibration and testing services, possible customers of the RDS laboratories. Answers were obtained from four representatives of the RDS laboratories and from 26 defense and security enterprises. The secondary data were obtained from documentary research. The analysis was based on five dimensions defined in order to organize and improve the understanding of the research setting: RDS project coverage, regionality, network management, metrological traceability, and importance and visibility of the RDS. The results indicated that the performance of the RDS had not, up to that time, interfered in the metrological traceability of the products of the defense and security enterprises that participated in the research.
|
274 |
Design and Management of Collaborative Intrusion Detection Networks. Fung, Carol. January 2013 (has links)
In recent years, network intrusions have become a severe threat to the privacy and safety of computer users. Recent cyber attacks compromise a large number of hosts to form botnets. Hackers not only aim at harvesting private data and identity information from compromised nodes, but also use the compromised nodes to launch attacks such as distributed denial-of-service (DDoS) attacks.
As a countermeasure, Intrusion Detection Systems (IDSs) are used to identify intrusions by comparing observable behavior against suspicious patterns.
Traditional IDSs monitor computer activities on a single host or network traffic in a sub-network. They do not have a global view of intrusions and are not effective in detecting fast-spreading attacks or unknown, new threats. They can, however, achieve better detection accuracy through collaboration. An Intrusion Detection Network (IDN) is such a collaboration network, allowing IDSs to exchange information with each other and to benefit from the collective knowledge and experience shared by others. IDNs enhance the overall accuracy of intrusion assessment as well as the ability to detect new intrusion types.
Building an effective IDN is, however, a challenging task. For example, adversaries may compromise some IDSs in the network and then leverage the compromised nodes to send false information, or even to attack others in the network, which can undermine the efficiency of the IDN. It is, therefore, important for an IDN to detect and isolate malicious insiders. Another challenge is how to make an efficient intrusion assessment based on the collective diagnoses from other IDSs. Appropriate selection of collaborators and incentive-compatible resource management in support of IDSs' interaction with others are also key challenges in IDN design.
To achieve efficiency, robustness, and scalability, we propose an IDN architecture and especially focus on the design of four of its essential components, namely, trust management, acquaintance management, resource management, and feedback aggregation. We evaluate our proposals and compare them with prominent ones in the literature and show their superiority using several metrics, including efficiency, robustness, scalability, incentive-compatibility, and fairness. Our IDN design provides guidelines for the deployment of a secure and scalable IDN where effective collaboration can be established between IDSs.
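The trust management and feedback aggregation components described above can be illustrated with a minimal sketch: each peer IDS reports a suspicion score for an observed activity, and a trust value learned for each peer weights its report, so a compromised low-trust insider has little influence. The trust values, scores, and peer names below are invented for illustration and are not the thesis's actual model:

```python
# Hypothetical sketch of trust-weighted feedback aggregation in an IDN:
# each peer IDS returns a suspicion score in [0, 1] for an observed
# activity, and peers with higher trust weigh more in the final verdict.
# Trust values and scores are illustrative, not from the thesis.

def aggregate_feedback(feedback, trust):
    """Weighted average of peer scores, weights given by trust values."""
    total_trust = sum(trust[peer] for peer in feedback)
    if total_trust == 0:
        return 0.0
    return sum(score * trust[peer] for peer, score in feedback.items()) / total_trust

trust = {"ids_a": 0.9, "ids_b": 0.8, "ids_c": 0.1}   # ids_c may be compromised
feedback = {"ids_a": 0.9, "ids_b": 0.7, "ids_c": 0.0}  # ids_c reports "benign"

score = aggregate_feedback(feedback, trust)
print(f"aggregated suspicion: {score:.2f}")  # the low-trust dissent barely moves it
```

In this toy run the verdict stays high despite the dissenting report, because the dissenter's trust weight is small; a real IDN would also update trust from past accuracy and handle incentives, which this sketch omits.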
|
275 |
Methods of cooperative routing to optimize the lifetime of multi-hop wireless sensor networks. Jung, Jin Woo. 05 April 2013 (has links)
This dissertation presents methods of extending the network lifetime of multi-hop wireless sensor networks (WSNs) through routing that uses cooperative transmission (CT), referred to as cooperative routing. CT can have a signal-to-noise ratio (SNR) advantage over non-CT schemes through cooperative diversity and simple aggregation of transmit power, and one of its abilities is to extend the communication range of a wireless device using this SNR advantage. In this research, we use the range-extension ability of CT as a tool to mitigate the energy-hole problem of multi-hop WSNs and extend the network lifetime.
The main contributions of this research are (i) an analytical model for a cooperative routing protocol with a deployment method, (ii) cooperative routing protocols that can extend the network lifetime, and (iii) a formulation of the lifetime-optimization problem for cooperative routing. The analytical model developed in this research theoretically proves that, in a situation where non-CT routing cannot avoid the energy-hole problem, our CT method can solve it. PROTECT, a CT method based on the analytical model, provides a very simple way of doing cooperative routing and can improve the lifetime of non-CT networks significantly. REACT, a cooperative routing protocol that uses the energy information of nodes, overcomes some of the limitations of PROTECT and can be applied to any existing non-CT routing protocol to improve the network lifetime. Using REACT and analytical approaches, we also show that cooperative routing can be beneficial in multi-hop energy-harvesting WSNs. By formulating and solving the lifetime-optimization problem of cooperative routing, which requires a much more sophisticated formulation than that of non-CT routing, we explore the optimal lifetime bounds and behaviors of cooperative routing. Finally, we study and design online cooperative routing methods that can perform close to the optimal cooperative routing.
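The range-extension idea behind CT can be made concrete with a toy calculation: when several nodes transmit the same symbol cooperatively, the received powers add, so the aggregate SNR at a distant receiver exceeds what any single node achieves. The path-loss model, transmit powers, and noise floor below are simplifying assumptions for illustration, not the dissertation's channel model:

```python
# Toy illustration of the CT range-extension idea: cooperating nodes'
# received powers add, raising the aggregate SNR at the receiver.
# Path-loss exponent, powers, and noise floor are assumed values.
import math

def received_power(tx_power, distance, path_loss_exp=2.0):
    """Simplified free-space path loss: Pr = Pt / d^alpha."""
    return tx_power / (distance ** path_loss_exp)

def snr_db(rx_powers, noise_power):
    """Aggregate SNR in dB when cooperating signals combine."""
    return 10 * math.log10(sum(rx_powers) / noise_power)

noise = 1e-9
dist = 100.0
single = snr_db([received_power(0.1, dist)], noise)
coop = snr_db([received_power(0.1, dist) for _ in range(3)], noise)
print(f"single node: {single:.1f} dB, three cooperating nodes: {coop:.1f} dB")
```

Three cooperating transmitters gain 10·log10(3) ≈ 4.8 dB over a single transmitter in this idealized model; it is that margin which lets border nodes reach the sink directly and relieve the energy hole around it.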
|
276 |
Contributions to an advanced design of a Policy Management System. Reyes Muñoz, María Angélica. 04 July 2003 (has links)
Las redes de hoy en día presentan un gran crecimiento, una alta complejidad de gestión, y los nuevos servicios tienen requerimientos cada vez más estrictos. Es por ello que las plataformas de gestión de la década pasada resultan inadecuadas en los nuevos escenarios de red. Esta Tesis es una contribución a los nuevos esquemas de gestión para redes de gran escala; en especial, es una contribución a los sistemas de gestión basados en políticas, sin perder por ello la compatibilidad con los sistemas de gestión que se ocupan actualmente, como por ejemplo SNMP, la gestión basada en agentes, etc. Las investigaciones relacionadas hasta ahora con los sistemas de gestión basados en políticas se enfocan principalmente en la gestión de recursos locales y en el control de admisión. La Tesis que se sustenta en este trabajo ofrece una perspectiva de la utilización de las políticas en un contexto más amplio: se propone una arquitectura para la gestión de red utilizando directorios y roles de políticas, analizando las políticas desde su fase de diseño hasta su configuración en los elementos físicos de la red. Se considera que la creación de políticas pueden llevarla a cabo diferentes entidades, por ejemplo cuando las crea el administrador de la red, cuando los propios usuarios crean sus políticas (políticas personalizadas), o bien cuando la red, basándose en un conjunto de políticas previamente definidas, crea a partir de ellas nuevas políticas (metapolíticas). En esta Tesis la representación de las políticas de alto nivel se basa en los modelos propuestos por el IETF y el DMTF: el Policy Core Information Model (PCIM) y sus extensiones (PCIMe). Se propone un esquema de clases orientadas a objetos para el almacenamiento de las políticas en un directorio LDAP (Lightweight Directory Access Protocol).
Este esquema es una de las contribuciones que esta Tesis realiza, la cual se ve reflejada en un draft realizado en conjunción con el grupo de trabajo de políticas del IETF. Debido a que no es posible implementar directamente las políticas de alto nivel en los elementos físicos de la red, es necesario establecer un conjunto de parámetros de configuración de red que definan la política que debe aplicarse. Para resolver este mapeo se crearon perfiles SLS (Service Level Specification), basados en la especificación de nivel de servicio que el usuario acuerda con el proveedor de servicio Internet. En la implementación realizada se decidió utilizar cuatro perfiles; sin embargo, la granularidad que se elija en la creación de perfiles SLS se deja abierta para que el administrador de la red cree los perfiles necesarios de acuerdo con las características topológicas de la red, los objetivos empresariales, etc. El directorio LDAP que se utiliza como repositorio de políticas almacena cientos o miles de políticas que son necesarias para resolver las diferentes tareas de gestión involucradas en un sistema de redes heterogéneas; esto puede afectar la ejecución del sistema. Por lo tanto, se diseñaron métodos basados en roles de políticas para seleccionar la política o el conjunto de políticas adecuado que debe implementarse en la red en un momento específico. Para resolver los conflictos que puedan ocurrir entre las políticas seleccionadas y evitar inconsistencias en la red, se crearon diversos módulos para la prevención y resolución de conflictos entre políticas.
El primer proceso interviene en la creación de las políticas detectando conflictos sintácticos, es decir, se analiza que la política esté correctamente diseñada y que pueda ser interpretada sin problemas por la red; posteriormente se verifica que la política pueda implementarse en los elementos de la topología de red que se utilice y que cubra los objetivos empresariales existentes. Para el caso de conflictos que puedan ocurrir en tiempo de ejecución, se diseñó un método basado en espacios hiper-geométricos que permite identificar un conflicto potencial e indicar la política adecuada que debe implementarse en la red. Dicho método está basado en una serie de métricas propuestas para definir cada servicio. Se realiza en la Tesis una aplicación de dicho método para el encaminamiento basado en restricciones de Calidad de Servicio en una red con Servicios Diferenciados y MPLS. / In today's telecommunications world, networks offer several new services with higher and higher requirements, which implies an increase in management complexity that cannot be adequately handled by the management platforms of previous years. This thesis is a contribution to new management schemes for large-scale networks; especially, it is a set of contributions to Policy-Based Management Systems (PBMS), without losing compatibility with current management systems such as SNMP, agent-based management, etc. Current research mainly proposes the use of policies for the configuration of local network devices and for admission control. This thesis takes a wider perspective on the use of policies. An efficient architecture for network management on the basis of directories and policy roles is proposed, together with a full analysis of policies from their design to their implementation in the network elements.
The creation of policies can be carried out by different entities, for example network administrators, users (personalized policies), or the network itself, which can create its own policies based on a previous set of policies (metapolicies). In this thesis the representation of high-level policies is based on the Policy Core Information Model (PCIM) and its extensions (PCIMe) from the DMTF and the IETF. Policies are stored in a directory using the Lightweight Directory Access Protocol (LDAP), via an object-oriented class model designed in this thesis. These results led to an Internet draft for the policy working group of the IETF. Because direct implementation of high-level policies in the network elements is not possible, it is necessary to establish a set of configuration parameters that define the policy to be enforced in the network. The methodology to map high-level policies to low-level policies is detailed in this thesis. The mapping processes involve the use of policy roles and of profiles that come from the Service Level Specifications (SLS) that users agree with the network. The implementation of the management system uses four SLS profiles, but it is scalable, allowing profiles to be added according to different aspects such as new services offered by the network, the topology of the network, business goals, etc. The policy architecture manages heterogeneous interconnected networks; for this reason, policy repositories have to be capable of storing hundreds or thousands of policies in order to achieve the desired behavior in the entire network. Since policy decision points have to choose the adequate policies to apply in the network from a very large set of policies, network performance could be affected.
This thesis proposes an efficient selection and evaluation process on the basis of both policy roles and the network status at a specific time. To solve possible conflicts that can occur between selected policies and avoid system inconsistencies, a set of models for the prevention and resolution of conflicts between policies is proposed. The prevention process includes an algorithm to avoid syntactic conflicts and the edition of new policies that would conflict with previously defined policies. The prevention process also considers congruency among policies, business goals, and network topology. The conflict resolution process solves conflicts occurring during the operation of the system; this method is based on hyper-geometrical spaces and policy roles to select the adequate policy from among the conflicting policies. These methods are presented in the thesis with an application to a routing system with Quality of Service (QoS) restrictions in a network scenario based on Differentiated Services and Multiprotocol Label Switching (MPLS).
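The hyper-geometrical conflict idea can be sketched in miniature: if each policy covers a region of a metric space (here, ranges of bandwidth and delay) and prescribes an action, two policies conflict when their regions overlap but their actions differ. The policies, metrics, and actions below are invented for illustration; the thesis's actual metric spaces and resolution strategy are richer than this:

```python
# Hypothetical sketch of run-time conflict detection in the spirit of
# the hyper-space method: each policy is a hyper-rectangle over QoS
# metrics plus an action; overlapping regions with different actions
# signal a conflict. Policy contents are invented for illustration.

def regions_overlap(a, b):
    """True if the hyper-rectangles a and b intersect on every metric."""
    return all(a[m][0] <= b[m][1] and b[m][0] <= a[m][1] for m in a)

def find_conflicts(policies):
    conflicts = []
    for i, (region_i, action_i) in enumerate(policies):
        for region_j, action_j in policies[i + 1:]:
            if regions_overlap(region_i, region_j) and action_i != action_j:
                conflicts.append((action_i, action_j))
    return conflicts

gold = ({"bandwidth": (5, 10), "delay": (0, 50)}, "mark EF")
bronze = ({"bandwidth": (8, 20), "delay": (40, 200)}, "mark BE")
print(find_conflicts([gold, bronze]))  # overlapping region, different actions
```

A resolution step would then pick one policy for the overlapping region, e.g. by policy role or priority; the sketch only flags the conflict.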
|
277 |
Complexity of Supply Chains: A Case Study of Purchasing Activities and Relationships. Hanebrant, Magnus; Kinderbäck, Emil. January 2013 (has links)
Executive Summary: In today's complex world, with customers as well as suppliers scattered around the globe, the inevitable outcome is complexity. Going back to the early days of industrialism, companies to a large extent owned the whole chain from supplies to sales of the final products. An example is Ford: the company controlled almost the entire chain and even established its own rubber plantation. During the last decades, companies have switched to a more intense focus on their core competences, leaving supporting services, raw material, and components to others. Again using Ford as an example, the manufacturing industry now uses sub-suppliers for components and material. Partly this is because today there is a far broader variety in what is produced, according to customers' different demands. Earlier, people simply bought a car, but today people have varying needs as well as a desire to express themselves by choosing model, color, rims, et cetera. Today these companies are to a larger extent characterized as developers-designers-assemblers. The choice was to investigate FläktWoods Jönköping, a Swedish company, part of the FläktWoods Group. The company has been producing climate control equipment since 1918 and is considered one of the world leaders in its line of business. Some of this company's customer and product categories have been investigated, together with the relevant competition and relationships. An investigation regarding some of FläktWoods' supplier categories and the related issues of competition and relationships has also been performed. This has been done in order to understand how these matters are connected and affect each other, as well as to develop guidelines for handling them. Interviews with different managers in the company were conducted and the results were compared to related scientific literature. By studying FläktWoods, certain patterns of internal as well as external relationships were found.
It became clear that, as the customer-perceived complexity of the products sold and the complexity of the components purchased by FläktWoods increased, the importance and complexity of internal as well as external relationships increased. Also, with less competition, relationships increased in importance. The outcome of these patterns is a framework, structured in a number of steps, that helps in forming these relationships by considering the nature of the products, the components, and the competition. This can be seen as a tool for FläktWoods, and potentially for other manufacturing companies, when forming different relationships.
|
278 |
Fault Detection and Identification in Computer Networks: A Soft Computing Approach. Mohamed, Abduljalil. January 2009 (has links)
Governmental and private institutions rely heavily on reliable computer networks for their everyday business transactions. Downtime of their infrastructure networks may result in millions of dollars in cost. Fault management systems are used to keep today's complex networks running without significant downtime cost, using either active or passive techniques. Active techniques impose excessive management traffic, whereas passive techniques often ignore the uncertainty inherent in network alarms, leading to unreliable fault identification performance. In this research work, new algorithms are proposed for both types of techniques so as to address these handicaps.
Active techniques use probing technology so that the managed network can be tested periodically and suspected malfunctioning nodes can be effectively identified and isolated. However, the diagnosing probes introduce extra management traffic and storage space. To address this issue, two new CSP (Constraint Satisfaction Problem)-based algorithms are proposed to minimize management traffic while effectively maintaining the diagnostic power of the available probes. The first algorithm is based on the standard CSP formulation and aims at reducing the available dependency matrix significantly as a means of reducing the number of probes; the obtained probe set is used for fault detection and fault identification. The second algorithm is a fuzzy CSP-based algorithm. It is adaptive in the sense that an initial reduced fault detection probe set is utilized to determine the minimum set of probes used for fault identification. Based on the extensive experiments conducted in this research, both algorithms have demonstrated advantages over existing methods in terms of the overall management traffic needed to successfully monitor the targeted network system.
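The probe-reduction idea can be illustrated with a small greedy sketch over a dependency matrix (which probe traverses which nodes): keep adding the probe that distinguishes the most still-confusable node pairs until every pair of nodes has a distinct probe signature. This greedy heuristic is a stand-in for the thesis's CSP formulation, and the matrix contents are invented:

```python
# Illustrative greedy reduction of a probe dependency matrix (not the
# CSP formulation from the thesis): a probe separates a node pair (u, v)
# if it traverses exactly one of the two, so a faulty u and a faulty v
# would produce different probe outcomes. Matrix values are invented.
from itertools import combinations

def select_probes(dep_matrix, nodes):
    """dep_matrix: {probe: set of nodes the probe traverses}."""
    unresolved = set(combinations(sorted(nodes), 2))
    chosen = []
    while unresolved:
        best = max(dep_matrix, key=lambda p: sum(
            (u in dep_matrix[p]) != (v in dep_matrix[p]) for u, v in unresolved))
        separated = {(u, v) for u, v in unresolved
                     if (u in dep_matrix[best]) != (v in dep_matrix[best])}
        if not separated:
            break  # remaining pairs cannot be distinguished by any probe
        chosen.append(best)
        unresolved -= separated
    return chosen

matrix = {"p1": {"n1", "n2"}, "p2": {"n2", "n3"}, "p3": {"n1", "n2", "n3"}}
print(select_probes(matrix, {"n1", "n2", "n3"}))
```

In this toy matrix, two of the three probes suffice to give every node a distinct signature, so the third probe's management traffic can be saved.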
Passive techniques employ alarms emitted by network entities. However, the fault evidence provided by these alarms can be ambiguous, inconsistent, incomplete, and random. To address these limitations, alarms are correlated using a distributed Dempster-Shafer Evidence Theory (DSET) framework, in which the managed network is divided into a cluster of disjoint management domains. Each domain is assigned an intelligent agent for collecting and analyzing the alarms generated within that domain. These agents are coordinated by a single higher-level entity, i.e., an agent manager that combines the partial views of these agents into a global one. Each agent employs a DSET-based algorithm that utilizes the probabilistic knowledge encoded in the available fault propagation model to construct a local composite alarm. Dempster's rule of combination is then used by the agent manager to correlate these local composite alarms.
Furthermore, an adaptive fuzzy DSET-based algorithm is proposed to utilize the fuzzy information provided by the observed cluster of alarms so as to accurately identify the malfunctioning network entities. In this way, inconsistency among the alarms is removed by weighing each received alarm against the others, while the randomness and ambiguity of the fault evidence are addressed within a soft computing framework. The effectiveness of this framework has been investigated through extensive experiments.
The proposed fault management system is able to detect malfunctioning behavior in the managed network with considerably less management traffic. Moreover, it effectively manages the uncertainty intrinsic to network alarms, thereby reducing their negative impact and significantly improving the overall performance of the fault management system.
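Dempster's rule of combination, which the agent manager uses to fuse the agents' local composite alarms, can be shown on a toy frame of two suspect nodes. The frame, mass assignments, and agent names below are invented for illustration; only the combination rule itself is standard:

```python
# A minimal sketch of Dempster's rule of combination, as used to fuse
# evidence from two agents about which node is faulty. Hypotheses are
# frozensets of suspect nodes; mass values are invented.

def dempster_combine(m1, m2):
    """Combine two mass functions defined on frozenset focal elements."""
    combined = {}
    conflict = 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2  # mass assigned to incompatible hypotheses
    if conflict >= 1.0:
        raise ValueError("total conflict: masses cannot be combined")
    # renormalize by the non-conflicting mass
    return {h: w / (1.0 - conflict) for h, w in combined.items()}

N1, N2 = frozenset({"node1"}), frozenset({"node2"})
theta = N1 | N2  # the full frame: "either node could be faulty"
agent_a = {N1: 0.7, theta: 0.3}          # agent A mostly blames node1
agent_b = {N1: 0.6, N2: 0.3, theta: 0.1}

fused = dempster_combine(agent_a, agent_b)
for h, w in sorted(fused.items(), key=lambda kv: -kv[1]):
    print(set(h), round(w, 3))
```

The fused belief concentrates on node1, since both agents support it, while the 0.21 of conflicting mass (A blames node1 while B blames node2) is discarded and the rest renormalized.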
|
279 |
Fault Detection and Identification in Computer Networks: A soft Computing ApproachMohamed, Abduljalil January 2009 (has links)
Governmental and private institutions rely heavily on reliable computer networks for
their everyday business transactions. The downtime of their infrastructure networks may result in millions of dollars in cost. Fault management systems are used to keep today’s complex networks running without significant downtime cost, either by using active techniques or passive techniques. Active techniques impose excessive management traffic, whereas passive techniques often ignore uncertainty inherent in network alarms,leading to unreliable fault identification performance. In this research work, new
algorithms are proposed for both types of techniques so as address these handicaps.
Active techniques use probing technology so that the managed network can be tested periodically and suspected malfunctioning nodes can be effectively identified and isolated. However, the diagnostic probes introduce extra management traffic and consume storage space. To address this issue, two new CSP (Constraint Satisfaction Problem)-based algorithms are proposed to minimize management traffic while maintaining the diagnostic power of the available probes. The first algorithm is based on the standard CSP formulation and aims at significantly reducing the dependency matrix as a means of reducing the number of probes; the resulting probe set is used for both fault detection and fault identification. The second is a fuzzy CSP-based algorithm. It is adaptive in the sense that an initial, reduced fault detection probe set is used to determine the minimum set of probes needed for fault identification. In the extensive experiments conducted in this research, both algorithms demonstrated advantages over existing methods in terms of the overall management traffic needed to successfully monitor the targeted network system.
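The probe-reduction idea can be sketched as a greedy set cover over the dependency matrix: keep adding the probe that covers the most still-untested nodes until every node is covered. This is an illustrative approximation only, not the thesis's CSP formulation, and all names and data are hypothetical.

```python
def select_probes(dependency, n_nodes):
    """Greedily pick a probe subset whose union covers all nodes.

    dependency: dict mapping probe name -> set of node indices the
    probe traverses (one row of the dependency matrix per probe).
    Returns the chosen probes and any nodes no probe can reach.
    """
    uncovered = set(range(n_nodes))
    chosen = []
    while uncovered:
        # Pick the probe covering the most still-uncovered nodes.
        best = max(dependency, key=lambda p: len(dependency[p] & uncovered))
        gain = dependency[best] & uncovered
        if not gain:  # remaining nodes are unreachable by any probe
            break
        chosen.append(best)
        uncovered -= gain
    return chosen, uncovered

# Hypothetical 4-probe dependency matrix over 5 nodes.
probes = {
    "p1": {0, 1, 2},
    "p2": {2, 3},
    "p3": {1, 3, 4},
    "p4": {4},
}
chosen, missed = select_probes(probes, 5)  # two probes suffice here
```

In this toy instance the greedy pass keeps only `p1` and `p3`, halving the probe set while every node remains observable.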
Passive techniques employ alarms emitted by network entities. However, the fault evidence provided by these alarms can be ambiguous, inconsistent, incomplete, and random. To address these limitations, alarms are correlated using a distributed Dempster-Shafer Evidence Theory (DSET) framework, in which the managed network is divided into a cluster of disjoint management domains. Each domain is assigned an intelligent agent that collects and analyzes the alarms generated within that domain. These agents are coordinated by a single higher-level entity, an agent manager, which combines the agents' partial views into a global one. Each agent employs a DSET-based algorithm that uses the probabilistic knowledge encoded in the available fault propagation model to construct a local composite alarm. Dempster's rule of combination is then used by the agent manager to correlate these local composite alarms.
Furthermore, an adaptive fuzzy DSET-based algorithm is proposed that uses the fuzzy information provided by the observed cluster of alarms to accurately identify the malfunctioning network entities. In this way, inconsistency among the alarms is removed by weighing each received alarm against the others, while the randomness and ambiguity of the fault evidence are handled within a soft computing framework. The effectiveness of this framework has been investigated through extensive experiments.
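One common way to weigh each alarm against the others is credibility discounting: score each evidence source by its average similarity to its peers, then discount its masses accordingly so that a dissenting alarm contributes less. This is a generic sketch of that idea, not necessarily the thesis's adaptive fuzzy algorithm; all values are hypothetical.

```python
def similarity(m1, m2, hyps):
    """Crude similarity between two mass assignments (1 = identical)."""
    return 1.0 - 0.5 * sum(abs(m1.get(h, 0.0) - m2.get(h, 0.0)) for h in hyps)

def discount(mass, alpha, frame):
    """Shafer discounting: keep a fraction alpha of each mass and move
    the remainder to the whole frame (total ignorance)."""
    out = {h: alpha * m for h, m in mass.items()}
    out[frame] = out.get(frame, 0.0) + (1.0 - alpha)
    return out

# Three alarms' evidence over candidate faults {A, B}; the third dissents.
A, B = frozenset("A"), frozenset("B")
frame = A | B
sources = [{A: 0.7, frame: 0.3}, {A: 0.6, frame: 0.4}, {B: 0.8, frame: 0.2}]
hyps = {A, B, frame}

# Credibility of each source = mean similarity to the other sources.
weights = []
for i, m in enumerate(sources):
    others = [s for j, s in enumerate(sources) if j != i]
    weights.append(sum(similarity(m, o, hyps) for o in others) / len(others))

# The outlier alarm receives the lowest weight before combination.
discounted = [discount(m, w, frame) for m, w in zip(sources, weights)]
```

The discounted mass functions still sum to 1 each and can then be fused with Dempster's rule, with the inconsistent alarm's influence dampened.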
The proposed fault management system is able to detect malfunctioning behavior in the managed network with considerably less management traffic. Moreover, it effectively manages the uncertainty intrinsic to network alarms, thereby reducing its negative impact and significantly improving the overall performance of the fault management system.
|
280 |
DiffServ/MPLS Network Design and Management
Anjali, Tricha 09 April 2004 (has links)
The MultiProtocol Label Switching (MPLS) framework is used in many networks to provide efficient load balancing, distributing traffic so that Quality of Service (QoS) can be provisioned efficiently. When the MPLS framework is combined with the Differentiated Services (DiffServ) architecture, the two together can provide aggregate-based service differentiation and QoS; their combined use in a network is called DiffServ-aware Traffic Engineering (DS-TE). Such DiffServ-based MPLS networks demand efficient methods for QoS provisioning. In this thesis, an automated manager for these DiffServ-based MPLS networks is proposed. This manager, called the Traffic Engineering Automated Manager (TEAM), is a centralized authority that adaptively manages a DiffServ/MPLS domain and is responsible for dynamic bandwidth and route management. TEAM is designed to provide a novel architecture capable of managing large-scale MPLS/DiffServ domains without human intervention: it constantly monitors the network state and reconfigures the network to handle network events efficiently. Under the umbrella of TEAM, new schemes for Label Switched Path (LSP) setup/teardown, traffic routing, and network measurement are proposed and evaluated through simulations. Extensions to Generalized MPLS (GMPLS) networks and inter-domain management are also proposed.
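The route-management half of LSP setup is commonly realized as constrained shortest-path first (CSPF): prune every link whose free bandwidth cannot carry the requested LSP, then run a shortest-path search on what remains. A minimal sketch of that generic mechanism (topology, costs, and bandwidths are invented for illustration; this is not TEAM's actual algorithm):

```python
import heapq

def cspf(links, src, dst, demand):
    """Constrained shortest path: drop links with insufficient free
    bandwidth, then run Dijkstra on the residual topology.

    links: dict (u, v) -> (cost, available_bandwidth), undirected.
    Returns (path, total cost), or (None, inf) if no path satisfies
    the bandwidth constraint.
    """
    adj = {}
    for (u, v), (cost, bw) in links.items():
        if bw >= demand:  # constraint: enough headroom for the LSP
            adj.setdefault(u, []).append((v, cost))
            adj.setdefault(v, []).append((u, cost))
    dist, queue = {src: 0}, [(0, src, [src])]
    while queue:
        d, node, path = heapq.heappop(queue)
        if node == dst:
            return path, d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, cost in adj.get(node, []):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(queue, (nd, nxt, path + [nxt]))
    return None, float("inf")

# Hypothetical 4-node domain: the cheap b-d link lacks headroom.
links = {
    ("a", "b"): (1, 100), ("b", "d"): (1, 40),
    ("a", "c"): (2, 200), ("c", "d"): (2, 200),
}
path, cost = cspf(links, "a", "d", demand=50)  # forced onto a-c-d
```

Dropping the demand to 30 lets the cheaper a-b-d route back in, which is the kind of bandwidth-aware rerouting decision a centralized manager like TEAM makes as it monitors link state.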
|