31

Analysis of RED packet loss performance in a simulated IP WAN

Engelbrecht, Nico 26 June 2013 (has links)
The Internet supports a diverse range of applications with differing service requirements. Next-generation networks provide high-speed connectivity between hosts, leaving the service provider to configure network devices appropriately in order to maximize network performance. Service-provider settings are based on best-practice recommendations, which leaves room to optimize these settings further. This dissertation focuses on a packet-discarding algorithm, known as random early detection (RED), to determine parameters that maximize utilization of a resource. The two dominant traffic protocols used across an IP backbone are UDP and TCP. UDP flows transmit packets regardless of network conditions, continuing at the same rate even as packets are dropped. TCP flows, however, respond to network conditions by reducing their transmission rate in response to packet loss, which indicates that the network is congested. The sliding-window mechanism, also known as the TCP congestion window, adjusts to the acknowledgements the source node receives from the destination node, providing a means to share the available bandwidth across a network. A well-known and widely used simulation environment, network simulator 2 (NS2), was used to analyze the RED mechanism. The UDP and TCP traffic generated by the simulator complies with theory, which verifies its validity: the autocorrelation functions of the two traffic types differ as theoretical and practical results predict, with UDP traffic exhibiting short-range dependency and TCP traffic long-range dependency. Simulation results show the effects of the RED algorithm on network traffic and equipment performance. Random packet discarding is shown to improve the stability of source transmission rates as well as node utilization. If the packet-dropping probability is set high, TCP source transmission rates are low; a low drop probability gives high transmission rates to a few sources and low rates to the majority. An ideal packet-drop probability was therefore obtained that balances TCP source transmission rates and node utilization. Statistical distributions fitted to sampled simulation data also show that random packet discarding improves the network. The results contribute to congestion control across wide area networks: even though a number of queue-management implementations exist, RED remains the one most widely deployed by service providers. / Dissertation (MEng)--University of Pretoria, 2013. / Electrical, Electronic and Computer Engineering / unrestricted
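The record itself contains no code; as a rough illustration of the mechanism under study, the sketch below implements the textbook RED behaviour: an exponentially weighted moving average of the queue length and a drop probability that rises linearly between a minimum and a maximum threshold. The parameter names and values (`min_th`, `max_th`, `w_q`, `max_p`) are generic placeholders, not the settings tuned in the dissertation, and the refinement that spreads drops using a count since the last drop is omitted.

```python
import random

class RedQueue:
    """Minimal textbook RED drop decision (illustrative only)."""

    def __init__(self, min_th=5, max_th=15, w_q=0.002, max_p=0.1):
        self.min_th = min_th      # below this average queue length, never drop
        self.max_th = max_th      # above this average queue length, always drop
        self.w_q = w_q            # weight of the exponential moving average
        self.max_p = max_p        # drop probability reached at max_th
        self.avg = 0.0            # EWMA of the instantaneous queue length

    def should_drop(self, queue_len):
        # Update the average queue length on each arriving packet.
        self.avg = (1 - self.w_q) * self.avg + self.w_q * queue_len
        if self.avg < self.min_th:
            return False
        if self.avg >= self.max_th:
            return True
        # Drop probability grows linearly between the two thresholds.
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p
```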
32

Toward Distributed At-scale Hybrid Network Test with Emulation and Simulation Symbiosis

Rong, Rong 28 September 2016 (has links)
In the past decade or so, significant advances were made in the field of Future Internet Architecture (FIA) design. Undoubtedly, the size of the Future Internet will increase tremendously, and so will the complexity of its users' behaviors. This means most future Internet applications and services can only achieve and demonstrate their full potential at large scale, so the development of network testbeds that can validate key design decisions and expose operational issues at scale is essential to FIA research. Cyber-infrastructure testbeds have also made remarkable progress, and meaningful network studies depend on using them appropriately to obtain accurate experimental results. Current network experimentation, however, is intrinsically deficient: existing testbeds do not offer scalability, flexibility, and realism at the same time. This dissertation constructs a hybrid system for conducting at-scale network studies and experiments by exploiting the distributed computing ability of current testbeds. First, this work presents a synchronization scheme for parallel discrete event simulation that offers transparent scalability and good performance on various high-end computing platforms. The parallel simulator we implement self-adapts for performance when running on supercomputers with disparate architectures, and it can handle models of different sizes, levels of modeling detail, and complexity. Second, this work addresses the issue of studying network design and implementation realistically at scale through the use of distributed cyber-infrastructure testbeds. An existing symbiotic approach is applied to integrate emulation with simulation so that together they overcome the limitations of a purely physical setup. The symbiotic method is used to extend the capabilities of a specific emulator, Mininet: applications run directly on the virtual machines and software switches, while network connectivity is represented by detailed simulation at scale. We also propose a method that uses the symbiotic approach to coordinate separate Mininet instances, each representing a different set of the overlapping network flows. This approach significantly improves the scalability of network experiments.
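The record gives no implementation detail, but the symbiotic idea it describes can be pictured as a feedback loop: the emulator reports the traffic its applications actually generate, the simulator computes the delay and loss a much larger modeled network would impose on that traffic, and those conditions are pushed back onto the emulated links. The sketch below is a hypothetical, heavily simplified rendering of that loop; the class and method names (`measure_flows`, `advance`, `shape_link`) are invented for illustration and do not correspond to the dissertation's code or to Mininet's API.

```python
import time

class SymbioticBridge:
    """Toy emulation/simulation feedback loop (names are illustrative)."""

    def __init__(self, emulator, simulator, period=0.5):
        self.emulator = emulator      # assumed to expose measure_flows() / shape_link()
        self.simulator = simulator    # assumed to expose advance(dt, demands)
        self.period = period          # wall-clock length of one exchange round

    def run(self, rounds):
        for _ in range(rounds):
            # 1. Observe what the real applications sent in the last period.
            demands = self.emulator.measure_flows()            # {flow_id: bytes}
            # 2. Let the large-scale simulated network carry that demand.
            conditions = self.simulator.advance(self.period, demands)
            # 3. Impose the simulated delay/loss back onto the emulated links.
            for link, (delay_ms, loss_pct) in conditions.items():
                self.emulator.shape_link(link, delay_ms, loss_pct)
            time.sleep(self.period)   # keep emulation and simulation roughly in step
```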
33

Modelování směrovacího protokolu Babel / Modelling of Babel Routing Protocol

Rek, Vít January 2015 (has links)
This thesis deals with the simulation of the Babel routing protocol. The goal is to implement a simulation model for the OMNeT++ simulator. The text includes a description of the protocol and of the basic principles of computer network simulation in the OMNeT++ environment using the INET library. Furthermore, the text discusses existing implementations and proposes a simulation model, followed by a description of its implementation. Finally, the correctness of the created model is verified.
34

Modelování směrovacího protokolu EIGRP / Modelling of EIGRP Routing Protocol

Bloudíček, Jan January 2014 (has links)
Network simulation allows analysis of the behavior of computer networks and their configured protocols. This thesis focuses on the EIGRP routing protocol and its integration into the OMNeT++ simulation environment. The text includes a detailed description of the protocol and its configuration on Cisco devices. Furthermore, the text focuses on the design of an extension that supports the routing protocol, and the implementation of the protocol according to this design is then described. Finally, the implemented solution is compared with the output of real devices.
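As a point of reference for one behavior such a model must reproduce, the sketch below computes EIGRP's classic composite metric with the default K-values (K1 = K3 = 1, K2 = K4 = K5 = 0), in which only the minimum bandwidth along the path and the cumulative delay contribute. This is the standard published formula, not code from the thesis.

```python
def eigrp_classic_metric(min_bandwidth_kbps, total_delay_us):
    """EIGRP composite metric with default K-values (K1=K3=1, K2=K4=K5=0).

    min_bandwidth_kbps: lowest interface bandwidth along the path, in kbit/s
    total_delay_us:     sum of interface delays along the path, in microseconds
    """
    bw = 10**7 // min_bandwidth_kbps   # scaled inverse of the bottleneck bandwidth
    delay = total_delay_us // 10       # delay expressed in tens of microseconds
    return 256 * (bw + delay)

# Example: a route reached over two FastEthernet links (100,000 kbit/s, 100 us each)
# -> 256 * (100 + 20) = 30720, the familiar value seen in "show ip route".
print(eigrp_classic_metric(100_000, 200))   # 30720
```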
35

HydraNetSim: A Parallel Discrete Event Simulator

Fahad Azeemi, Muhammad January 2012 (has links)
Discrete event simulation is the most suitable type of simulation for analyzing a complex system in which changes happen at discrete time instants, and it is a major experimental methodology in several scientific and engineering domains. Unfortunately, a conventional discrete event simulator cannot meet the increasing computational and structural complexity of modern systems such as peer-to-peer (P2P) systems, so parallel discrete event simulation has been a focus of researchers for several decades. No simulator is regarded as a standard that can satisfy the demands of all kinds of applications: a simulator that yields good performance for one kind of application may be inefficient for others. Furthermore, although multi-core computing hardware has advanced considerably, none of the mainstream P2P discrete event simulators is designed for parallel simulation that exploits multi-core architectures. The proposed HydraNetSim parallel discrete event simulator (PDES) is a step toward addressing these issues. Developing a simulator that can support the very large numbers of nodes needed to realize a massive P2P system, and that can also execute in parallel, is a non-trivial task. The literature review in this thesis gives a broad overview of prevailing approaches to the tricky problem of simulating a massive, large, and rapidly changing system, and provides a foundation for adopting a suitable architecture for a PDES. HydraNetSim is a discrete event simulator that supports parallel simulation and exploits the parallelization capabilities of modern computing hardware. It is based on a novel master/slave paradigm: it divides the simulation model among a number of slaves (a cluster of processes) according to the number of cores provided by the underlying hardware, and each slave can be assigned to a specific CPU on a different core. Synchronization of the slaves is achieved with a variant of the classic Null-Message Algorithm (NMA), with a focus on keeping the synchronization overhead as low as possible. Furthermore, HydraNetSim provides log information for debugging purposes and introduces a new mechanism for gathering and writing simulation results to a database. The experimental results show that the sequential counterpart of HydraNetSim (SDES) takes 41.6% more time than HydraNetSim-2Slave and 23.6% more time than HydraNetSim-3Slave. HydraNetSim-2Slave is 1.42 times faster, consumes 1.18 times more memory, and supports 2.02 times more nodes than the sequential discrete event simulator (SDES), whereas HydraNetSim-3Slave executes 1.24 times faster, consumes 2.08 times more memory, and supports 3.04 times more nodes than SDES. The scaling factor of HydraNetSim is ⌈(β-1)*102.04%⌉ of the maximum number of nodes supported by SDES, where β is the number of slaves.
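The synchronization scheme named in this abstract, the Null-Message Algorithm, is a classic conservative technique: each logical process (slave) periodically tells its neighbors a lower bound on the timestamp of anything it could send them in the future (its current clock plus its lookahead), and a process only executes events that lie safely below the minimum bound it has received. The sketch below is a generic, single-machine illustration of that idea, not HydraNetSim's implementation; the master/slave plumbing and database reporting described in the thesis are omitted.

```python
import heapq
import itertools

class Slave:
    """One logical process in a toy conservative (null-message) simulation."""

    def __init__(self, name, lookahead):
        self.name = name
        self.lookahead = lookahead      # minimum delay on any event sent to a neighbor
        self.clock = 0.0
        self.events = []                # local future-event list (min-heap)
        self._seq = itertools.count()   # tie-breaker for equal timestamps
        self.neighbor_bounds = {}       # neighbor name -> promised lower bound

    def schedule(self, timestamp, handler):
        heapq.heappush(self.events, (timestamp, next(self._seq), handler))

    def null_message(self):
        # Promise sent to neighbors: "nothing from me will ever arrive earlier."
        return self.clock + self.lookahead

    def receive_null(self, neighbor, bound):
        self.neighbor_bounds[neighbor] = bound

    def safe_time(self):
        # Events strictly below this time can run without fear of a straggler.
        return min(self.neighbor_bounds.values(), default=float("inf"))

    def step(self):
        executed = 0
        while self.events and self.events[0][0] < self.safe_time():
            timestamp, _, handler = heapq.heappop(self.events)
            self.clock = timestamp
            handler()
            executed += 1
        return executed   # 0 means: blocked until fresher null messages arrive
```

In the thesis's terminology the master would relay these bounds among the slaves each round; as long as every slave's lookahead is strictly positive, the exchange cannot deadlock.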
36

Parasitic Tracking Mobile Wireless Networks / Parasitisk spårning av mobila trådlösa nätverk

Xu, Bowen January 2021 (has links)
Along with the growth and popularity of mobile networks, users enjoy more convenient connection and communication. However, exposure of user presence in mobile networks is becoming a major concern, and it has motivated a plethora of Location Privacy Protection Mechanisms (LPPMs), proposed and analysed notably against powerful adversaries with rich data at their disposal, e.g., mobile network service providers or Location Based Services (LBS). In this thesis, we consider a complementary challenge: exposure of users to their peers or other nearby devices. In other words, we are concerned with devices in the vicinity that happen to eavesdrop on (or learn in the context of a peer-to-peer protocol execution) MAC/IP addresses or Bluetooth device names and can link user activities over a large area (e.g., a city), especially when a small subset of the mobile network devices parasitically logs such encounters, even scattered in space and time, and collaboratively breaches user privacy. The eavesdroppers can be honest-but-curious network infrastructure such as wireless routers and base stations, or adversaries equipped with Bluetooth or WiFi sniffers. The goal of this thesis is to simulate location privacy attacks on mobile networks and measure location privacy exposure under these attacks. We consider adversaries with varying capabilities, e.g., the number of deployable eavesdroppers in the network and the coverage of each eavesdropper, and evaluate the effect of such adversarial capabilities on the privacy exposure of mobile users. We evaluate privacy exposure with two different metrics: Exposure Degree and Average Displacement Error (ADE). We use Exposure Degree as a preliminary metric for the general coverage of the deployed eavesdroppers in the considered area; ADE measures the average distance between the user's actual trace points and the predicted trajectory. We simulate three attack cases in our scheme. In the first case, we assume the attacker only acquires the data collected from users; we vary the number of receivers to test attack capacity, and Exposure Degree is used to evaluate location privacy. For the second and third cases, we assume the attacker also has some knowledge of users' historical traces, so the attacker can utilize machine learning models to predict a user's trace. We leverage a Long Short-Term Memory (LSTM) neural network and a Hidden Markov Model (HMM) for real-time prediction, and a Heuristic LSTM to reconstruct more precise user trajectories. ADE is used to evaluate the degree of location privacy exposure in these cases. The experimental results show that LSTM outperforms HMM at trace prediction in our scheme. A higher number of eavesdroppers decreases the ADE of the LSTM model (i.e., increases user location privacy exposure). Increasing the receivers' communication range can decrease ADE, but further increases in range eventually cause ADE to rise again. The Heuristic LSTM model outperforms LSTM at compromising user location privacy when the attacker reconstructs more precise user trajectories from an incompletely observed trace sequence.
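The ADE metric used above is simply the mean Euclidean distance between paired predicted and actual positions over a trace. A minimal sketch follows, assuming trace points are (x, y) coordinates in a common planar projection; for latitude/longitude one would substitute a geodesic distance. The example trace values are hypothetical.

```python
import math

def average_displacement_error(actual, predicted):
    """Mean Euclidean distance between paired actual and predicted points.

    actual, predicted: equal-length sequences of (x, y) tuples.
    """
    if len(actual) != len(predicted) or not actual:
        raise ValueError("traces must be non-empty and of equal length")
    total = sum(math.dist(a, p) for a, p in zip(actual, predicted))
    return total / len(actual)

# Hypothetical three-point trace (units: metres)
actual = [(0.0, 0.0), (10.0, 0.0), (20.0, 5.0)]
predicted = [(1.0, 0.0), (9.0, 1.0), (22.0, 5.0)]
print(average_displacement_error(actual, predicted))  # ~1.47
```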
37

Anticipating and Adapting to Increases in Water Distribution Infrastructure Failure Caused by Interdependencies and Heat Exposure from Climate Change

January 2019 (has links)
abstract: This dissertation advances the capability of water infrastructure utilities to anticipate and adapt to vulnerabilities in their systems arising from temperature increase and from interdependencies with other infrastructure systems. Impact assessment models of increased heat and of interdependencies were developed that incorporate probabilistic, spatial, temporal, and operational information. Key findings from the models are that, with increased heat, the increased likelihood of water quality non-compliances is particularly concerning; that the anticipated increases in failures of different hardware components generate different levels of concern (iron pipes first, then pumps, then PVC pipes); that the effects of temperature increase on hardware components and on service losses are non-linear due to the spatial criticality of components; and that modeling spatial and operational complexity helps to identify potential pathways of failure propagation between infrastructure systems. Exploring different parameters of the models allowed institutional strategies to be compared. Key findings are that either preventative maintenance or repair strategies can completely offset the additional outages from increased temperatures, though improved repair times reduce the overall duration of outages more than preventative maintenance does, and that coordinated strategies across utilities could be effective for mitigating vulnerability. / Dissertation/Thesis / Doctoral Dissertation Civil, Environmental and Sustainable Engineering 2019
38

Platforma pro mobilní agenty v bezdrátových senzorových sítích / Platform for Mobile Agents in Wireless Sensor Networks

Horáček, Jan January 2009 (has links)
This work deals with the implementation of an agent platform that is able to run agent code in wireless sensor networks. The implementation targets the MICAz platform, which uses the TinyOS operating system for developing applications. This work lists the relevant TinyOS components and illustrates how such a platform can be used for our purposes. We describe the main features of the ALLL language and also demonstrate some example agents.
39

A Hybrid Routing Protocol For Communications Among Nodes With High Relative Speed In Wireless Mesh Networks

Peppas, Nikolaos 01 January 2007 (has links)
Wireless mesh networks (WMNs) are a new and promising wireless technology that uses already available hardware and software components. This thesis proposes a routing algorithm for military applications. More specifically, a specialized scenario is investigated consisting of a network of flying Unmanned Aerial Vehicles (UAVs) executing reconnaissance missions. The proposed routing algorithm is hybrid in nature and uses both reactive and proactive routing characteristics to transmit information. Through simulations run on a specially built stand-alone simulator written in Java, packet overhead, delivery ratio, and latency were monitored with respect to varying numbers of nodes, node density, and mobility. The results showed that the high overhead leads to a high delivery ratio, while latency tends to increase as the network grows larger. All metrics proved sensitive to high-mobility conditions.
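The abstract does not spell out how the hybrid scheme combines its two modes, so the sketch below shows only the generic pattern such protocols follow: consult a proactively maintained routing table first, and fall back to an on-demand (reactive) route discovery when no fresh entry exists. The class and method names are illustrative assumptions, not taken from the thesis or its simulator.

```python
import time

class HybridRouter:
    """Generic proactive-first, reactive-fallback route lookup (illustrative)."""

    def __init__(self, route_timeout=5.0):
        self.table = {}                   # dest -> (next_hop, last_updated)
        self.route_timeout = route_timeout

    def proactive_update(self, dest, next_hop):
        # Called when periodic topology advertisements arrive.
        self.table[dest] = (next_hop, time.monotonic())

    def next_hop(self, dest, discover):
        """Return a next hop; `discover` is a reactive route-request callback."""
        entry = self.table.get(dest)
        if entry and time.monotonic() - entry[1] < self.route_timeout:
            return entry[0]                      # fresh proactive route
        next_hop = discover(dest)                # flood a route request on demand
        if next_hop is not None:
            self.table[dest] = (next_hop, time.monotonic())
        return next_hop
```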
40

The Distributed Open Network Emulator: Applying Relativistic Time

Bergstrom, Craig Casey 11 September 2006 (has links)
The increasing scale and complexity of network applications and protocols motivate the need for tools to aid in understanding network dynamics at similarly large scales. While current network simulation tools achieve large-scale modeling, they do so by ignoring much of the intra-program state that plays an important role in the overall system's behavior. This work presents The Distributed Open Network Emulator, a scalable distributed network model that incorporates application program state to achieve high-fidelity modeling. The Distributed Open Network Emulator, or DONE for short, is a parallel and distributed network simulation-emulation hybrid that achieves both scalability and the capability to run existing application code with minimal modification. These goals are accomplished through the use of a protocol stack extracted from the Linux kernel, a new programming model based on C, and a scaled real-time method for distributed synchronization. One of the primary challenges in the development of DONE was reconciling the opposing requirements of emulation and simulation: emulated code executes directly in real time, which progresses autonomously, whereas simulation models are forced ahead by the execution of events, an explicitly controlled mechanism. Relativistic time is used to integrate these two paradigms into a single model while providing efficient distributed synchronization. To demonstrate that the model provides the desired traits, a series of experiments is described. They show that DONE can provide super-linear speedup on small clusters, nearly linear speedup on moderately sized clusters, and accurate results when tuned appropriately. / Master of Science
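The reconciliation this abstract describes, between autonomously progressing real time (emulation) and event-driven virtual time (simulation), amounts to running the event loop against a scaled wall clock. The sketch below illustrates only that general idea: a dilation factor maps wall-clock time to virtual time, simulation events fire when the scaled clock reaches their timestamps, and emulated code runs during the gaps. It is an assumed toy, not DONE's actual mechanism or code.

```python
import heapq
import itertools
import time

class ScaledRealTimeLoop:
    """Run timestamped events against a dilated wall clock (illustrative)."""

    def __init__(self, dilation=10.0):
        self.dilation = dilation        # 10.0 => virtual time advances 10x slower
        self.start = time.monotonic()
        self.events = []                # (virtual_timestamp, seq, handler) min-heap
        self._seq = itertools.count()   # tie-breaker so handlers are never compared

    def virtual_now(self):
        return (time.monotonic() - self.start) / self.dilation

    def schedule(self, virtual_timestamp, handler):
        heapq.heappush(self.events, (virtual_timestamp, next(self._seq), handler))

    def run(self, until):
        while self.events and self.events[0][0] <= until:
            ts, _, handler = heapq.heappop(self.events)
            # Sleep until the dilated wall clock reaches the event's timestamp;
            # emulated application code (in other threads/processes) keeps
            # running during this gap, which is what the dilation buys.
            wait = ts * self.dilation - (time.monotonic() - self.start)
            if wait > 0:
                time.sleep(wait)
            handler()
```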
