151

Improved performance high speed network intrusion detection systems (NIDS) : a high speed NIDS architectures to address limitations of packet loss and low detection rate by adoption of dynamic cluster architecture and traffic anomaly filtration (IADF)

Akhlaq, Monis January 2011 (has links)
Intrusion Detection Systems (IDS) are considered a vital component of network security architecture. They allow the administrator to detect unauthorized use of, or attacks upon, a computer, network or telecommunication infrastructure. There is no doubt about the necessity of these systems; however, their performance remains a critical question. This research has focused on designing a high-performance Network Intrusion Detection System (NIDS) model. The work begins with the evaluation of Snort, an open-source NIDS considered a de facto IDS standard. The motive behind the evaluation strategy is to analyze the performance of Snort and ascertain the causes of its limited performance; the design and implementation of high-performance techniques is the final objective of this research. Snort has been evaluated on a highly sophisticated test bench by employing evasion and avoidance strategies to simulate real-life normal and attack-like traffic. The test methodology is based on the concept of stressing the system and degrading its performance in terms of its packet-handling capacity. This has been achieved by normal traffic generation; fuzzing; traffic saturation; parallel dissimilar attacks; and manipulation of background traffic, e.g. fragmentation, packet sequence disturbance and illegal packet insertion. The evaluation phase led to two high-performance designs: first, a distributed hardware architecture using cluster-based adoption, and second, a cascaded scheme of anomaly-based filtration and signature-based detection. The first mechanism is based on Dynamic Cluster adoption using refined policy routing and Comparator Logic. The design is a two-tier mechanism in which the front end of the cluster is a load-balancer that distributes traffic according to pre-defined policy routing, ensuring maximum utilization of cluster resources. The traffic load-sharing mechanism reduces packet drop by exchanging state information between the load-balancer and the cluster nodes and by implementing switchovers between nodes when traffic exceeds a pre-defined threshold. Finally, a recovery evaluation concept using Comparator Logic further enhances overall efficiency by recovering data lost during switchovers; the retrieved data is then analyzed by a recovery NIDS to identify any leftover threats. Intelligent Anomaly Detection Filtration (IADF), a cascaded architecture of anomaly-based filtration and signature-based detection, is the second high-performance design. The IADF design preserves NIDS resources by eliminating a large portion of the traffic according to well-defined logic. In addition, the filtration concept augments the detection process by eliminating the part of malicious traffic that could otherwise go undetected by most signature-based mechanisms. We have evaluated the mechanism's ability to detect Denial of Service (DoS) and Probe attempts by analyzing its performance on the Defence Advanced Research Projects Agency (DARPA) dataset. The concept is also supported by time-based normalized sampling mechanisms that incorporate normal traffic variations to reduce false alarms. Finally, we have observed that IADF augments the overall detection process by reducing false alarms, increasing the detection rate and incurring less data loss.
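As a rough, hypothetical sketch of the threshold-driven switchover the dynamic-cluster design describes (node names, thresholds and the flow-hashing policy below are invented for illustration, not taken from the thesis):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    threshold_pps: int          # packets/s beyond which traffic is switched away
    load_pps: int = 0
    buffered: list = field(default_factory=list)   # packets kept for a recovery stage

class LoadBalancer:
    """Hypothetical cluster front end: policy-routes flows to nodes and switches
    a flow to the least-loaded node when the primary exceeds its threshold."""

    def __init__(self, nodes):
        self.nodes = nodes

    def route(self, flow_id, packet):
        # Pre-defined policy routing: hash the flow id onto a node.
        primary = self.nodes[hash(flow_id) % len(self.nodes)]
        if primary.load_pps < primary.threshold_pps:
            primary.load_pps += 1
            return primary.name
        # Switchover: pick the least-loaded node; keep a copy of the packet so a
        # recovery check (in the spirit of the thesis's Comparator Logic) can re-inspect it.
        backup = min(self.nodes, key=lambda n: n.load_pps)
        backup.load_pps += 1
        backup.buffered.append(packet)
        return backup.name

if __name__ == "__main__":
    lb = LoadBalancer([Node("nids-1", threshold_pps=2), Node("nids-2", threshold_pps=2)])
    for i in range(6):
        print(lb.route(flow_id="10.0.0.1->10.0.0.2", packet=f"pkt{i}"))
```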
152

Distributed discovery and management of alternate internet paths with enhanced quality of service

Rakotoarivelo, Thierry, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2006 (has links)
The convergence of recent technology advances opens the way to new ubiquitous environments, where network-enabled devices collectively form invisible pervasive computing and networking environments around the users. These users increasingly require extensive applications and capabilities from these devices. Recent approaches propose that cooperating service providers, at the edge of the network, offer these required capabilities (i.e. services), instead of having them directly provided by the devices. Thus, the network evolves from a plain communication medium into an endless source of services. Such a service, namely an overlay application, is composed of multiple distributed application elements, which cooperate via a dynamic communication mesh, namely an overlay association. The Quality of Service (QoS) perceived by the users of an overlay application greatly depends on the QoS on the communication paths of the corresponding overlay association. This thesis asserts and shows that it is possible to provide QoS to an overlay application by using alternate Internet paths resulting from the compositions of independent consecutive paths. Moreover, this thesis also demonstrates that it is possible to discover, select and compose these independent paths in a distributed manner within a community comprising a large number of autonomous cooperating peers, such as the aforementioned service providers. Thus, the main contributions of this thesis are i) a comprehensive description and QoS characteristic analysis of these composite alternate paths, and ii) an original architecture, termed SPAD (Super-Peer based Alternate path Discovery), which allows the discovery and selection of these alternate paths in a distributed manner. SPAD is a fully distributed system with no single point of failure, which can be easily and incrementally deployed on the current Internet. It empowers the end-users at the edge of the network, allowing them to directly discover and utilize alternate paths.
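The composite alternate paths described above chain independent consecutive path segments through intermediate peers. A minimal sketch of how the QoS of such a composition might be estimated, using the usual additive/min/multiplicative rules (an assumption for illustration, not necessarily the exact characterisation given in the thesis):

```python
from dataclasses import dataclass

@dataclass
class PathSegment:
    delay_ms: float        # one-way delay
    bandwidth_mbps: float  # available bandwidth
    loss: float            # packet loss probability in [0, 1]

def compose(a: PathSegment, b: PathSegment) -> PathSegment:
    """Compose two consecutive segments (e.g. source->relay peer->destination)
    into one alternate end-to-end path: delays add, bandwidth is the bottleneck,
    losses combine multiplicatively."""
    return PathSegment(
        delay_ms=a.delay_ms + b.delay_ms,
        bandwidth_mbps=min(a.bandwidth_mbps, b.bandwidth_mbps),
        loss=1.0 - (1.0 - a.loss) * (1.0 - b.loss),
    )

# Pick the best path among the direct route and candidate relayed compositions,
# here simply by delay.
direct = PathSegment(delay_ms=180.0, bandwidth_mbps=2.0, loss=0.02)
candidates = [
    compose(PathSegment(40.0, 10.0, 0.01), PathSegment(60.0, 8.0, 0.01)),
    compose(PathSegment(70.0, 5.0, 0.00), PathSegment(90.0, 6.0, 0.02)),
]
best = min(candidates + [direct], key=lambda p: p.delay_ms)
print(best)
```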
153

Analysis and Design of Vehicular Networks

Wu, Hao 18 November 2005 (has links)
Advances in computing and wireless communication technologies have increased interest in smart vehicles, vehicles equipped with significant computing, communication and sensing capabilities to provide services to travelers. Smart vehicles can be exploited to improve driving safety and comfort as well as optimize surface transportation systems. Wireless communications among vehicles and between vehicles and roadside infrastructure represent an important class of vehicle communications. One can envision creating an integrated radio network leveraging various wireless technologies that work together in a seamless fashion. Based on cost-performance tradeoffs, different network configurations may be appropriate for different environments. An understanding of the properties of different vehicular network architectures is absolutely necessary before services can be successfully deployed. Based on this understanding, efficient data services (e.g., data dissemination services) can be designed to accommodate application requirements. This thesis examines several research topics concerning both the evaluation and design of vehicular networks. We explore the properties of vehicle-to-vehicle (v2v) communications. We study the spatial propagation of information along the road using v2v communications. Our analysis identifies the vehicle traffic characteristics that significantly affect information propagation. We also evaluate the feasibility of propagating information along a highway. Several design alternatives exist to build infrastructure-based vehicular networks. Their characteristics have been evaluated in a realistic vehicular environment. Based on these evaluations, we have developed some insights into the design of future broadband vehicular networks capable of adapting to varying vehicle traffic conditions. Based on the above analysis, opportunistic forwarding that exploits vehicle mobility to overcome vehicular network partitioning appears to be a viable approach for data dissemination using v2v communications for applications that can tolerate some data loss and delay. We introduce a methodology to design enhanced opportunistic forwarding algorithms. Practical algorithms derived from this methodology have exhibited different performance/overhead tradeoffs. An in-depth understanding of wireless communication performance in a vehicular environment is necessary to provide the groundwork for realizing reliable mobile communication services. We have conducted an extensive set of field experiments to uncover the performance of short-range communications between vehicles and between vehicles and roadside stations in a specific highway scenario.
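A toy illustration of the store-carry-forward idea behind opportunistic forwarding; the greedy distance rule and copy budget below are invented stand-ins, not the algorithms derived in the thesis:

```python
import random

def should_forward(carrier_pos, neighbour_pos, destination_pos, copies_left):
    """Hypothetical greedy rule: hand a copy to an encountered vehicle only if it
    is closer to the destination and the copy budget is not exhausted."""
    closer = abs(destination_pos - neighbour_pos) < abs(destination_pos - carrier_pos)
    return closer and copies_left > 0

# Toy 1-D highway: a message hops opportunistically toward km 10.0 as vehicles meet.
carrier, destination, copies = 0.0, 10.0, 3
for _ in range(20):
    neighbour = carrier + random.uniform(-1.0, 2.0)   # position of an encountered vehicle
    if should_forward(carrier, neighbour, destination, copies):
        carrier, copies = neighbour, copies - 1       # the neighbour now carries the message
    if carrier >= destination:
        break
print(f"message carried to km {carrier:.1f}, copies left {copies}")
```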
154

System Support for End-to-End Performance Management

Agarwala, Sandip 09 July 2007 (has links)
This dissertation introduces, implements, and evaluates the novel concept of "Service Paths", which are system-level abstractions that capture and describe the dynamic dependencies between the different components of a distributed enterprise application. Service paths are dynamic because they capture the natural interactions between application services dynamically composed to offer some desired end-user functionality. Service paths are distributed because such sets of services run on networked machines in distributed enterprise data centers. Service paths cross multiple levels of abstraction because they link end-user application components like web browsers with system services like HTTP providing communications and with embedded services like hardware-supported data encryption. Service paths are system-level abstractions that are created without end-user, application, or middleware input, but despite these facts, they are able to capture application-relevant performance metrics, including end-to-end latencies for client requests and the contributions to these latencies from application-level processes and from software/hardware resources like protocol stacks or network devices. Beyond conceiving of service paths and demonstrating their utility, this thesis makes three concrete technical contributions. First, we propose a set of signal analysis techniques called "E2Eprof" that identify the service paths taken by different request classes across a distributed IT infrastructure and the time spent in each such path. It uses a novel algorithm called "pathmap" that computes the correlation between the message arrival and departure timestamps at each participating node and detects dependencies among them. A second contribution is a system-level monitoring toolkit called "SysProf", which captures monitoring information at different levels of granularity, ranging from tracking the system-level activities triggered by a single system call, to capturing the client-server interactions associated with a service path, to characterizing the server resources consumed by sets of clients or client behaviors. The third contribution of the thesis is a publish-subscribe based monitoring data delivery framework called "QMON". QMON offers high levels of predictability for service delivery and supports utility-aware monitoring while also being able to differentiate between different levels of service for monitoring, corresponding to the different classes of SLAs maintained for applications.
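Loosely in the spirit of the pathmap idea, correlating per-interval message counts at two nodes across candidate lags can expose a dependency between them and the time spent on the hop; the function names and windowing below are illustrative, not E2Eprof's actual interface:

```python
def pearson(x, y):
    # Plain Pearson correlation coefficient of two equal-length series.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def best_lag(departures, arrivals, max_lag):
    """Slide the downstream arrival-count series against the upstream
    departure-count series and report the lag with the strongest correlation."""
    scores = {}
    for lag in range(max_lag + 1):
        n = len(departures) - lag
        scores[lag] = pearson(departures[:n], arrivals[lag:lag + n])
    return max(scores, key=scores.get), scores

# Two nodes whose traffic is dependent with roughly a 2-interval delay.
upstream   = [5, 9, 3, 12, 7, 4, 10, 6, 8, 2]
downstream = [1, 2, 5, 9, 3, 12, 7, 4, 10, 6]
lag, scores = best_lag(upstream, downstream, max_lag=4)
print(f"strongest dependency at lag {lag}: r = {scores[lag]:.2f}")
```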
155

On Risk Management of Electrical Distribution Systems and the Impact of Regulations

Wallnerström, Carl Johan January 2008 (has links)
The Swedish electricity market was de-regulated in 1996, followed by new laws and a new regulation applied to the natural monopolies of electrical distribution systems (EDS). These circumstances have motivated distribution system operators (DSOs) to introduce more comprehensive analysis methods. The laws, the regulation and additional incentives have been investigated in this work, and the results of this study can be valuable when developing risk methods or other quantitative methods applied to EDS. This tendency is not unique to Sweden; results from a comparative study of customer outage compensation laws in Sweden and the UK are, for example, included.

As part of investigating these incentives, studies of the Swedish regulation of customer network tariffs have been performed, which provide valuable lessons when developing regulation models in different countries. The Swedish regulatory model, referred to as the Network Performance Assessment Model (NPAM), was created for one of the first de-regulated electricity markets in the world and has a unique and novel approach. For the first time, an overall presentation of the NPAM, including a description of its underlying theory, has been carried out as part of this work. However, the model has met with difficulties and its future usage is uncertain. Furthermore, the robustness of the NPAM has been evaluated in two studies, with the main conclusion that the NPAM is sensitive to small variations in input data. Results from these studies are explained theoretically by investigating the algorithms of the NPAM.

A pre-study of a project on developing international test systems is presented; this ongoing project aims to be a useful input when developing risk methods. An application study is included that systematically describes the overall risk management process at a DSO, including an evaluation and ideas for future developments. The main objective is to support DSOs in the development of risk management and to give academic reference material that utilizes industry experience. An idea for a risk management classification has been derived from this application study. The study provides an input to the final objective of a quantitative risk method.
156

Highly variable real-time networks: an Ethernet/IP solution and application to railway trains

Constantopoulos, Vassilios 03 July 2006 (has links)
In this thesis we study the key requirements and solutions for the feasibility and application of Ethernet-TCP/IP technology to the networks we have termed Highly-Variable Real-Time Networks (HVRN). This particular class of networks poses exceptionally demanding requirements because their physical and logical topologies are both temporally and spatially variable. We devised and introduced specific mechanisms for applying Ethernet-TCP/IP to HVRNs, with particular emphasis on effective and reliable modular connectivity. Using a railroad train as a reference, this work analyzes the unique requirements of HVRNs and focuses on the backbone architecture for such a system under Ethernet and TCP/IP. / Doctorate in Applied Sciences
157

Reusability and hierarchical simulation modeling of communication systems for performance evaluation: Simulation environment, basic and generic models, transfer protocols

Mrabet, Radouane 12 June 1995 (has links)
<p align="justify">The main contribution of this thesis is the emphasis made on the reusability concept, on one side, for designing a simulation environment, and on the other side, for defining two different levels of granularity for reusable network component libraries.</p><p><p align="justify">The design of our simulation environment, called AMS for Atelier for Modeling and Simulation, was based on existing pieces of software, which proved their usefulness in their respective fields. In order to carry out this integration efficiently, a modular structure of the atelier was proposed. The structure has been divided into four phases. Each phase is responsible of a part of the performance evaluation cycle. The main novelty of this structure is the usage of a dedicated language as a means to define a clear border between the editing and simulation phases and to allow the portability of the atelier upon different platforms. A prototype of the atelier has been developed on a SUN machine running the SunOs operating system. It is developed in C language.</p><p><p align="justify">The kernel of the AMS is its library of Detailed Basic Models (DBMs). Each DBM was designed in order to comply with the most important criterion which is reusability. Indeed, each DBM can be used in aeveral network architectures and can be a component of generic and composite models. Before the effective usage of a DBM, it is verified and validated in order to increase the model credibility. The most important contribution of this research is the definition of a methodology for modeling protocol entities as DBMs. We then tried to partly bridge the gap between specification and modeling. This methodology is based on the concept of function. Simple functions are modeled as reusable modules and stored into a library. The Function Based Methodology was designed to help the modeler to build efficiently and rapidly new protocols designed for the new generation of networks where several services can be provided. These new protocols can be dynamically tailored to the user' s requirements.</p><p> / Doctorat en sciences appliquées / info:eu-repo/semantics/nonPublished
158

An analysis of the correlation between packet loss and network delay on the performance of congested networks and their impact: case study University of Fort Hare

Lutshete, Sizwe January 2013 (has links)
In this paper we study packet delay and loss rate on the University of Fort Hare network. The focus of this paper is to evaluate information derived from a multipoint measurement of the University of Fort Hare network, collected over a period of three months, from June 2011 to August 2011, at the TSC uplink and at Ethernet hubs outside and inside the Internet firewall host. The specific value of this data set lies in the end-to-end instrumentation of all devices operating at the packet level, combined with the duration of observation. We will provide measures for the normal day-to-day operation of the University of Fort Hare network both at off-peak times and during peak hours. We expect to show the impact of delay and loss rate on the University of Fort Hare network. The data set will include a number of areas where service quality (delay and packet loss) is extreme, moderate or good, and we will examine the causes and their impact on network users.
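A simple sketch of how delay and loss rate can be derived from matched packet records at two measurement points; the record format and identifiers are hypothetical, not the capture format used in this study:

```python
def delay_and_loss(sent, received):
    """Match packets seen at an upstream tap against those seen downstream
    (keyed here by a packet id) and report mean delay and loss rate."""
    delays = [received[pid] - t_sent for pid, t_sent in sent.items() if pid in received]
    loss_rate = 1.0 - len(delays) / len(sent) if sent else 0.0
    mean_delay = sum(delays) / len(delays) if delays else float("nan")
    return mean_delay, loss_rate

# Timestamps in seconds at the two measurement points.
sent = {1: 0.000, 2: 0.010, 3: 0.020, 4: 0.030}
received = {1: 0.042, 2: 0.055, 4: 0.071}          # packet 3 was lost
mean_delay, loss = delay_and_loss(sent, received)
print(f"mean delay {mean_delay * 1000:.1f} ms, loss rate {loss:.0%}")
```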
159

Vers une solution de contrôle d’admission sécurisée dans les réseaux mesh sans fil / Towards a secure admission control in wireless mesh networks

Dromard, Juliette 06 December 2013 (has links)
Wireless mesh networks (WMNs) are low-cost, easily deployed networks that can extend the Internet into areas that other networks reach only with difficulty. However, several quality of service (QoS) and security issues hinder the large-scale deployment of WMNs. In this thesis we propose an admission control model and a reputation system to improve the performance of the mesh network and protect it from malicious nodes. Our admission control aims to guarantee the QoS of admitted flows in terms of bandwidth and delay while maximizing the utilization of channel capacity. The idea is to couple admission control with link scheduling in order to increase the available bandwidth: the network's links are rescheduled dynamically each time a new flow is accepted, so that only flows whose delay and bandwidth constraints can be respected are admitted, network capacity is increased and packet loss is reduced. We also propose a reputation system that assigns each node a reputation whose value reflects the node's real behaviour; it aims to detect malicious nodes while limiting the false alerts induced by packet loss on the network's links. The idea is to use statistical tests comparing the packet loss observed on a link with a pre-established loss model, together with a monitoring system composed of several modules that allow it to detect a large number of attacks. Both the admission control and the reputation system have been evaluated; the results show that each meets its objectives in terms of quality of service and security.
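A hedged sketch of the kind of statistical check the reputation system describes: comparing the drops observed on a link against a pre-established loss model and flagging a node only when the excess is statistically unlikely. The normal approximation and the 3-sigma threshold below are illustrative assumptions, not the thesis's exact test:

```python
import math

def loss_is_suspicious(forwarded, received, expected_loss, z_threshold=3.0):
    """Compare observed drops with what the link's loss model predicts;
    return True when the excess is unlikely to be explained by normal link loss."""
    dropped = forwarded - received
    expected_drops = forwarded * expected_loss
    sigma = math.sqrt(forwarded * expected_loss * (1.0 - expected_loss))
    z = (dropped - expected_drops) / sigma if sigma > 0 else float("inf")
    return z > z_threshold

# A node forwarded 1000 packets over a link whose model predicts 2% loss.
print(loss_is_suspicious(forwarded=1000, received=975, expected_loss=0.02))  # 25 drops: plausible
print(loss_is_suspicious(forwarded=1000, received=900, expected_loss=0.02))  # 100 drops: flagged
```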
160

Charged particle distributions and robustness of the neural network pixel clustering in ATLAS

Sidebo, Edvin January 2016 (has links)
This thesis contains a study of the robustness of the artificial neural network used in the ATLAS track reconstruction algorithm as a tool to recover tracks in dense environments. Different variations, motivated by potential discrepancies between data and simulation, are applied to the neural network's input while the corresponding change in the output is monitored. Within reasonable variation magnitudes, the neural networks prove to be robust to most variations. In addition, a measurement of charged particle distributions is summarised. This is one of the first such measurements carried out for proton-proton collisions at √s = 13 TeV, limited to a phase space defined by transverse momentum pT > 100 MeV and absolute pseudorapidity |η| < 2.5. Tracks are corrected for detector inefficiencies and unfolded to particle level. The result is compared to the predictions of different models. Overall, the EPOS and Pythia 8 A2 models show the best agreement with the data.

Tracks from electrically charged particles are reconstructed in ATLAS by combining measurements from the innermost subdetectors. In the extreme environments created in the proton-proton collisions at the Large Hadron Collider at CERN, it is of utmost importance that the track reconstruction algorithm performs well. The task is particularly difficult in dense environments where several particles travel close to each other, separated by distances comparable to the size of the detector's readout elements. An artificial neural network is used in the algorithm to classify measurements from the pixel detector, located closest to the interaction point, in order to identify tracks in dense environments that would otherwise be lost. This thesis investigates the stability of the neural network: its sensitivity is studied by manually manipulating its input and then evaluating its output. The network is trained on simulated data. The input variations are designed to probe differences between data and simulation caused by uncertainties in the simulation model or in the calibration of the pixel detector. Of the variations studied, an uncertainty in the charge scale or readout threshold of the pixel detector calibration has the largest effect on the network's output; other variations have a considerably smaller impact. The thesis also presents a study of distributions of electrically charged particles produced in proton-proton collisions, one of the first such studies for the second run of the Large Hadron Collider at a centre-of-mass energy of √s = 13 TeV. The measurement is restricted to the phase space defined by transverse momentum pT > 100 MeV and absolute pseudorapidity |η| < 2.5. Particle tracks are reconstructed and corrected for detector inefficiencies so that they can be presented at particle level, and are then compared with predictions from different models; the EPOS and Pythia 8 A2 models generally agree best with the data. The author has studied particles that migrate into and out of the phase space. The fraction of tracks associated with particles migrating in from outside is estimated with simulated data to be at most 10% near the boundaries of the phase space. The uncertainty on this fraction is estimated to be at most 4.5%, mainly due to the uncertainty on the amount of material in the innermost subdetectors.
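The robustness study amounts to re-evaluating the network under systematically varied inputs and recording the shift in its output. A minimal sketch of that procedure, with an invented toy scorer standing in for the ATLAS clustering network (its weights, structure and the variation magnitudes are not the real ones):

```python
import math

def toy_network(pixel_charges, threshold_shift=0.0, scale=1.0):
    """Stand-in for the pixel-cluster network: takes per-pixel charges, applies a
    calibration-like variation (charge scale and readout-threshold shift), and
    returns a score.  Weights and structure are invented for illustration only."""
    varied = [scale * q for q in pixel_charges if scale * q > threshold_shift]
    x = sum(varied) / 10.0 - len(varied) * 0.3
    return 1.0 / (1.0 + math.exp(-x))          # sigmoid "multi-particle" score

nominal = toy_network([1.2, 3.4, 0.6, 2.1])
for label, kwargs in [("charge scale +5%", {"scale": 1.05}),
                      ("threshold +1.0",   {"threshold_shift": 1.0})]:
    varied = toy_network([1.2, 3.4, 0.6, 2.1], **kwargs)
    print(f"{label}: output shifts by {varied - nominal:+.3f}")
```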
