111

Policy based network management of legacy network elements in next generation networks for voice services

Naidoo, Vaughn January 2002 (has links)
Magister Scientiae - MSc / Telecommunication companies, service providers and large enterprises are now adopting converged multi-service Next Generation Networks (NGNs). Network management is shifting from managing Network Elements (NEs) to managing services. This paradigm shift coincides with the rapid development of Quality of Service (QoS) protocols for IP networks. NEs and services are managed with Policy Based Network Management (PBNM), which is chiefly concerned with managing services that require QoS using the Common Open Policy Service (COPS) protocol. These services include Voice over IP (VoIP), video conferencing and video streaming. It follows that legacy NEs without QoS support need to be replaced and/or excluded from the network. However, since most of these services run over IP, and legacy NEs readily support IP, it may be unnecessary to discard legacy NEs if they can be made to fit within a PBNM approach. Our approach enables an existing PBNM system to include legacy NEs in its management paradigm. The Proxy-Policy Enforcement Point (P-PEP) and Queuing Policy Enforcement Point (Q-PEP) can enforce some degree of traffic shaping on a gateway to the legacy portion of the network. The P-PEP utilises firewall techniques based on the Simple Network Management Protocol (SNMP), which is common to legacy and contemporary NEs, while the Q-PEP uses queuing techniques in the form of Class Based Queuing (CBQ) and Random Early Discard (RED) for traffic control. / South Africa
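By way of illustration, the sketch below shows, in a self-contained form, the two queuing techniques named above: a RED-style early-drop queue and class-based scheduling across traffic classes. The class names, weights and thresholds are hypothetical, and this is a minimal sketch of the underlying mechanisms, not the thesis's Q-PEP implementation.

```python
import random
from collections import deque

class RedQueue:
    """Toy RED queue: drops packets early with rising probability as the
    EWMA of the queue length moves between min_th and max_th."""
    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.2, limit=30):
        self.q = deque()
        self.avg = 0.0
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.weight, self.limit = max_p, weight, limit

    def enqueue(self, pkt):
        # Exponentially weighted moving average of the queue length.
        self.avg = (1 - self.weight) * self.avg + self.weight * len(self.q)
        if len(self.q) >= self.limit or self.avg >= self.max_th:
            return False                      # hard / forced drop
        if self.avg > self.min_th:
            p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            if random.random() < p:
                return False                  # probabilistic early drop
        self.q.append(pkt)
        return True

    def dequeue(self):
        return self.q.popleft() if self.q else None

# Class-based queuing: one RED queue per traffic class, served in
# proportion to hypothetical weights (e.g. voice before bulk traffic).
classes = {"voice": (RedQueue(), 3), "video": (RedQueue(), 2), "best_effort": (RedQueue(), 1)}

def classify(pkt):
    # Hypothetical classifier: in a PBNM setting this decision would
    # follow policies pushed down from the policy server.
    return pkt.get("class", "best_effort")

def schedule():
    """One weighted round-robin pass over all traffic classes."""
    out = []
    for name, (queue, weight) in classes.items():
        for _ in range(weight):
            pkt = queue.dequeue()
            if pkt is not None:
                out.append(pkt)
    return out

if __name__ == "__main__":
    for i in range(50):
        pkt = {"id": i, "class": random.choice(["voice", "video", "best_effort"])}
        classes[classify(pkt)][0].enqueue(pkt)
    print([p["id"] for p in schedule()])
```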
112

Integrated Network Management Using Extended Blackboard Architecture

Prem Kumar, G 07 1900 (has links) (PDF)
No description available.
113

A Comparison of Pull- and Push-based Network Monitoring Solutions : Examining Bandwidth and System Resource Usage

Pettersson, Erik January 2021 (has links)
Monitoring of computer networks is central to ensuring that they function as intended, with solutions based on SNMP being used since the inception of the protocol. SNMP is, however, increasingly being challenged by solutions that, instead of requiring a request-response message flow, simply send information to a central collector at predefined intervals. These solutions are often based on Protobuf and gRPC, which are supported and promoted by equipment manufacturers such as Cisco, Huawei, and Juniper. Two models exist for monitoring. The pull model used by SNMP, where requests are sent out to retrieve data, has historically been widely used. The push model, where data is sent at predefined intervals without a preceding request, is used by the implementations based on Protobuf and gRPC. There is a perceived need to understand which model uses bandwidth and the monitored system's memory and processing resources more efficiently. The purpose of the thesis is to compare two monitoring solutions, one being SNMP and the other based on Protobuf and gRPC, to determine whether one makes more efficient use of bandwidth and of the system resources available to the network equipment. This could aid those who operate networks or develop monitoring software in deciding how to implement their solutions. The study is conducted as a case study in which two routers manufactured by Cisco and Huawei were used to gather data about the bandwidth, memory, and CPU utilisation of the two solutions. The measurements show that when retrieving information about objects with 1-byte values, SNMP was the better performer. When objects with larger values were retrieved, SNMP performed best until 26 objects were retrieved per message; above this point the combination of Protobuf and gRPC performed better, resulting in fewer bytes being sent for a given number of objects. No impact on the memory and CPU utilisation of the routers was observed.
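To illustrate the kind of trade-off behind such a crossover, the sketch below models a pull exchange (fixed request and response headers plus a verbose per-object encoding) against a push message (larger fixed envelope plus compact per-object fields). All byte counts are assumed round numbers chosen only to reproduce the qualitative behaviour described above; they are not the values measured in the thesis.

```python
def pull_bytes(num_objects: int) -> int:
    """Assumed SNMP-style cost: one request and one response per poll,
    each with a fixed header, plus a verbose OID/value pair per object.
    All sizes are illustrative, not measured."""
    header, per_object = 60, 26
    return 2 * header + num_objects * per_object

def push_bytes(num_objects: int) -> int:
    """Assumed gRPC/Protobuf-style cost: no request, one streamed message
    with a larger fixed envelope but compact per-object fields."""
    envelope, per_object = 500, 11
    return envelope + num_objects * per_object

if __name__ == "__main__":
    for n in (1, 10, 25, 26, 50, 100):
        pull, push = pull_bytes(n), push_bytes(n)
        print(f"{n:3d} objects: pull={pull:5d} B  push={push:5d} B  "
              f"cheaper={'pull' if pull < push else 'push'}")
```

With these assumed sizes the pull model is cheaper for small polls, while the push model wins once enough objects share the fixed envelope; the exact crossover point depends entirely on the assumed overheads.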
114

CCSDS SPACE LINK EXTENSION SERVICE MANAGEMENT STANDARDS AND PROTOTYPING ACTIVITIES

Pietras, John 10 1900 (has links)
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada / The Consultative Committee for Space Data Systems (CCSDS) is developing standards for the interface through which spaceflight mission managers request tracking, telemetry, and command (TT&C) and Space Link Extension (SLE) services from TT&C ground stations and networks. The standards are intended for use not just by the spaceflight projects and networks operated by the CCSDS member agencies, but also by commercial networks and networks operated by other governmental agencies. As part of the process of developing the standards, several prototypes are under development. This paper presents a summary status of both the emerging service request standards and the prototypes that implement them.
115

Maximising renewable hosting capacity in electricity networks

Sun, Wei January 2015 (has links)
The electricity network is undergoing significant changes in the transition to a low carbon system. The growth of renewable distributed generation (DG) creates a number of technical and economic challenges in the electricity network. While the development of the smart grid promises alternative ways to manage network constraints, the impact of such measures on the ability of the network to accommodate DG – the 'hosting capacity' – is not fully understood. It is of significance for both distribution network operators (DNOs) and DG developers to quantify the hosting capacity according to given technical or commercial objectives while remaining subject to a set of predefined limits. The combinatorial nature of the hosting capacity problem, together with the intermittent nature of renewable generation and the complex actions of smart control systems, means that evaluation of hosting capacity requires appropriate optimisation techniques. This thesis extends the knowledge of hosting capacity. Three specific but related areas are examined to fill gaps identified in existing knowledge. New evaluation methods are developed that allow the study of hosting capacity (1) under different curtailment priority rules, (2) with harmonic distortion limits, and (3) alongside energy storage systems. Together, these contributions improve DG planning in two directions: demonstrating the benefits provided by a range of smart grid solutions, and evaluating wider impacts to ensure compliance with all relevant planning standards and grid codes. As an outcome, the methods developed can help both DNOs and DG developers make sound and practical decisions, facilitating the integration of renewable DG in a more cost-effective way.
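As a toy illustration of hosting capacity as a constrained optimisation problem, the sketch below maximises the total DG capacity at three candidate buses subject to a shared thermal limit and linearised voltage-rise constraints. The sensitivities and limits are invented for the example and are not taken from the thesis.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical network data for three candidate DG buses.
volt_sens = np.array([            # p.u. voltage rise per MW injected (assumed)
    [0.004, 0.001, 0.000],        # constraint at bus 1
    [0.001, 0.005, 0.002],        # constraint at bus 2
    [0.000, 0.002, 0.006],        # constraint at bus 3
])
volt_headroom = np.array([0.03, 0.03, 0.03])   # allowed voltage rise (assumed)
feeder_limit_mw = 20.0                          # shared thermal limit (assumed)

# Maximise total hosted capacity = minimise -(p1 + p2 + p3).
c = -np.ones(3)
A_ub = np.vstack([volt_sens, np.ones((1, 3))])
b_ub = np.concatenate([volt_headroom, [feeder_limit_mw]])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3, method="highs")
print("hosting capacity per bus (MW):", np.round(res.x, 2))
print("total hosting capacity (MW):", round(-res.fun, 2))
```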
116

Intelligent-Agent-Based Management of Heterogeneous Networks for the Army Enterprise

Richards, Clyde E., Jr. 09 1900 (has links)
Approved for public release; distribution is unlimited. / The Army is undergoing a major realignment in accordance with the Joint Vision 2010/2020 transformation to establish an enterprise command that is the single authority to operate and manage the Army Enterprise Information Infrastructure. However, there are a number of critical network management (NM) issues that the Army will have to overcome before attaining the full capability to manage the full spectrum of Army networks at the enterprise level. The Army network environment consists of an excessive number of heterogeneous and incompatible applications, systems, and network architectures, along with a number of legacy systems and proprietary platforms. Most of the NM architectures in the Army are based on traditional centralized approaches such as the Simple Network Management Protocol (SNMP). Although SNMP is the most pervasive protocol, it lacks the scalability, reliability, flexibility and adaptability necessary to effectively support an enterprise network as large and complex as the Army's. Attempting to scale these technologies to this magnitude can be extremely difficult and very costly. This thesis argues that intelligent-agent-based technologies are a leading solution, among current technologies, for achieving the Army's enterprise network management goals. / Major, United States Army
117

SNMP over Wi-Fi wireless networks

Kerdsri, Jiradett 03 1900 (has links)
Approved for public release; distribution is unlimited / The Simple Network Management Protocol (SNMP) allows users of network equipment (i.e., network administrators) to remotely query the state of any managed device for system load, utilization and configuration. Windows NT, Windows 2000 and Windows XP Professional are all equipped with an SNMP service, so an SNMP manager can communicate with an SNMP agent running on a wireless 802.11b client. However, the remaining Windows operating systems, including Windows CE on the Pocket PC, have to run third-party proxy SNMP agents in order to be recognized by an SNMP management application. This thesis describes an implementation of a Pocket PC SNMP agent for two Pocket PC mobile devices accessing a wired network via an 802.11b wireless link. As a result of this implementation, an SNMP manager can communicate wirelessly with a Pocket PC client. However, it was also found that only some of the commercially available SNMP managers are able to access the mobile SNMP client and its management information base, due to incompatible implementations of the server and client software. / Lieutenant, Royal Thai Air Force
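For context, the manager-side query described above can be expressed in a few lines with a library such as pysnmp; the community string, agent address and OID below are placeholders, and this sketch is independent of the thesis's Windows/Pocket PC implementation.

```python
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

# Query sysDescr.0 from a (hypothetical) agent on a wireless client.
error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData('public', mpModel=1),          # SNMPv2c, placeholder community
    UdpTransportTarget(('192.168.1.50', 161)),   # placeholder agent address
    ContextData(),
    ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0)),
))

if error_indication:
    print(error_indication)
elif error_status:
    print(f"{error_status.prettyPrint()} at index {error_index}")
else:
    for name, value in var_binds:
        print(f"{name.prettyPrint()} = {value.prettyPrint()}")
```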
118

Cognitive Radio Networks : Elements and Architectures

Popescu, Alexandru January 2014 (has links)
As mobility and computing become ever more pervasive in society and business, the non-optimal use of radio resources has created many new challenges for telecommunication operators. Usage patterns of modern wireless handheld devices, such as smartphones and tablets, indicate that the signaling traffic they generate is many times larger than that of a traditional laptop. Furthermore, even though improvements such as the spectral efficiency gains brought by 4G approach theoretical limits, this is still not sufficient for many practical applications demanded by end users. Essentially, users located at the edge of a cell cannot achieve the high data throughputs promised by 4G specifications. Worse yet, Quality of Service bottlenecks in 4G networks are expected to become a major issue over the coming years given the rapid growth in mobile devices. The main problems stem from rigid mobile system architectures with limited possibilities to reconfigure terminals and base stations depending on spectrum availability. Consequently, new solutions must be developed that coexist with legacy infrastructures and, more importantly, improve upon them to enable flexibility in the modes of operation. To control the intelligence required for such modes of operation, cognitive radio technology is a key concept suggested for the so-called beyond-4th-generation mobile networks. The basic idea is to allow unlicensed users access to licensed spectrum, under the condition that the interference perceived by the licensed users is minimal. This can be achieved with devices capable of accurately sensing spectrum occupancy, learning about temporarily unused frequency bands, and reconfiguring their transmission parameters so that spectral opportunities can be effectively exploited. This indicates the need for a more flexible and dynamic allocation of spectrum resources, which in turn requires a new approach to cognitive radio network management. Accordingly, a novel architecture designed at the application layer is suggested to manage communication in cognitive radio networks. The goal is to improve the performance of a cognitive radio network through sensing, learning, optimization and adaptation.
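As a simple illustration of the sensing step, the sketch below uses energy detection to decide whether a licensed channel appears idle. The noise floor, detection margin and signal model are assumed values; this is not the architecture proposed in the thesis.

```python
import numpy as np

def channel_is_idle(samples: np.ndarray, noise_power: float, margin_db: float = 3.0) -> bool:
    """Energy detection: declare the channel idle if the measured energy
    stays below the noise floor plus a margin (all values assumed)."""
    energy = np.mean(np.abs(samples) ** 2)
    threshold = noise_power * 10 ** (margin_db / 10)
    return energy < threshold

rng = np.random.default_rng(0)
noise_power = 1.0
n = 1024

# Hypothetical observations: one idle channel (noise only) and one occupied
# channel (noise plus a licensed user's narrowband signal).
noise = (rng.normal(scale=np.sqrt(noise_power / 2), size=n)
         + 1j * rng.normal(scale=np.sqrt(noise_power / 2), size=n))
busy = noise + 2.0 * np.exp(1j * 2 * np.pi * 0.1 * np.arange(n))

print("idle channel detected as idle:", channel_is_idle(noise, noise_power))
print("busy channel detected as idle:", channel_is_idle(busy, noise_power))
```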
119

[en] A SOFTWARE AGENTS BASED ARCHITECTURE FOR THE AUTOMATION OF FAULT MANAGEMENT PROCESSES IN TELECOMMUNICATIONS NETWORKS / [pt] UMA ARQUITETURA BASEADA EM AGENTES DE SOFTWARE PARA A AUTOMAÇÃO DE PROCESSOS DE GERÊNCIA DE FALHAS EM REDES DE TELECOMUNICAÇÕES

ADOLFO GUILHERME SILVA CORREIA 11 October 2007 (has links)
[en] The last few years have been marked by a significant worldwide growth in the demand for telecommunications services. This scenario of network expansion, together with the need for coexistence and interoperability of different technologies in an economically viable way, poses great challenges for the management, operation and maintenance of telecommunications networks. This work presents some of the main network management models and paradigms traditionally employed in telecommunications networks and still widely adopted by industry today. Many of the presented models have been significantly influenced by concepts and techniques originating in the software engineering field. Particular emphasis is given to the use of network management techniques based on software agents. To this end, important concepts of software agents are presented, as well as examples of works in which software agents are used in the network management domain. Finally, an architecture based on software agents for fault management in legacy telecommunications networks, which are usually managed by centralized systems, is proposed. The main objective of this architecture is to allow the diagnosis and correction of network faults without overloading the centralized management system. To this end, the architecture uses software agents that distribute information maintained in the centralized management system to other agents of the system. In this way, the agents responsible for executing the fault diagnosis and correction procedures can perform their activities without direct communication with the centralized system.
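A minimal sketch of the idea of replicating centrally held information to local agents might look as follows; the class names and data are invented for illustration and do not reflect the proposed architecture's actual components.

```python
class CentralManager:
    """Centralized management system holding the network inventory."""
    def __init__(self):
        self.inventory = {"ne-1": {"type": "switch", "site": "A"},
                          "ne-2": {"type": "router", "site": "B"}}

class DiagnosisAgent:
    """Diagnoses faults using only its local copy of the inventory."""
    def __init__(self):
        self.local_inventory = {}

    def diagnose(self, alarm: dict) -> str:
        ne = self.local_inventory.get(alarm["ne"])
        if ne is None:
            return "unknown element - escalate to central system"
        return f"restart {ne['type']} at site {ne['site']}"

class DistributorAgent:
    """Copies information from the central system to local agents so that
    diagnosis can proceed without contacting the central system directly."""
    def __init__(self, manager: CentralManager):
        self.manager = manager

    def replicate_to(self, agent: DiagnosisAgent) -> None:
        agent.local_inventory = dict(self.manager.inventory)

manager = CentralManager()
diagnoser = DiagnosisAgent()
DistributorAgent(manager).replicate_to(diagnoser)
print(diagnoser.diagnose({"ne": "ne-1", "alarm": "link-down"}))
```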
120

[en] RFID APPLIED TO OPTICAL SPECTRUM FOR METROPOLITAN RING NETWORK RESOURCES INVENTORY AND ITS ECONOMIC IMPACT / [pt] UTILIZAÇÃO DA TECNOLOGIA RFID APLICADA NO ESPECTRO ÓPTICO PARA AVALIAÇÃO DOS RECURSOS DISPONÍVEIS EM ANÉIS METROPOLITANOS E SEU IMPACTO ECONÔMICO

CLAUDIA BARUCKE MARCONDES PAES LEME 15 June 2009 (has links)
[en] In this work, RFID, a technology globally employed in logistics and supply chain control, is applied at the optical physical layer to implement a real-time, reliable and distributed capacity inventory of metropolitan ring networks. The system is achieved by introducing a binary combination of RF subcarriers in the optical spectrum. The array of subcarriers is generated through a newly proposed code, termed EPC Telecom, which is compatible with traditional RFID codes. The value chain concept of Michael Porter is also employed to evaluate the economic performance of telecommunications operators and the impact of this new RFID system on these operators.
