About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Security in low power wireless networks : Evaluating and mitigating routing attacks in a reactive, on demand ad-hoc routing protocol / Säkerheten i trådlösa lågenerginätverk : Utvärdering och begränsning av routing attacker i ett reaktivt ad-hoc routing protokoll

Fredriksson, Tony, Ljungberg, Niklas January 2017 (has links)
Using low-energy devices to communicate over the air presents many security challenges, as resources in the world of the Internet of Things (IoT) are limited. Any extra computation or radio transmission that added security requires increases both computing time and energy consumption, and both are scarce resources in IoT. This thesis details the current state of the security mechanisms built into the commercially available protocol stacks Zigbee, Z-Wave, and Bluetooth Low Energy, and collects implemented and proposed solutions to common ways of attacking systems built on these protocol stacks. The attacks evaluated are denial of service/sleep, man-in-the-middle, replay, eavesdropping and, in mesh networks, sinkhole, black hole, selective forwarding, Sybil, wormhole, and hello flood. An intrusion detection system is proposed to detect sinkhole, selective forwarding, and Sybil attacks in the routing protocol of the Rime communication stack implemented in the Contiki operating system. The sinkhole and selective forwarding mitigation works close to perfection in larger lossless networks but suffers an increase in false positives in lossy environments. The Sybil detection is based on received signal strength and strengthens the blacklist used in the sinkhole and selective forwarding detection: a node that changes its ID to avoid the blacklist is detected as being in the same geographical position as the blacklisted node.
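As a rough illustration of the RSSI-based Sybil check described above, the sketch below flags a node whose received signal strength matches that of an already blacklisted node. The function names, data layout, and the 2 dB tolerance are assumptions for illustration; this is not the thesis's Contiki/Rime implementation.

```python
# Hedged sketch: RSSI-based Sybil check against a blacklist of node IDs.
# All names and the 2.0 dB threshold are illustrative assumptions, not the
# thesis's actual implementation.

RSSI_SIMILARITY_DB = 2.0  # assumed tolerance for "same position"

def is_probable_sybil(new_id, new_rssi, rssi_profiles, blacklist):
    """Return the blacklisted ID that new_id appears to impersonate, or None.

    rssi_profiles maps node_id -> last observed RSSI (dBm) at this node;
    blacklist is the set of IDs already flagged for sinkhole/selective
    forwarding behaviour.
    """
    for bad_id in blacklist:
        if bad_id == new_id:
            continue
        recorded = rssi_profiles.get(bad_id)
        if recorded is not None and abs(new_rssi - recorded) <= RSSI_SIMILARITY_DB:
            # A "new" identity transmitting from (apparently) the same spot
            # as a blacklisted node is treated as the same physical device.
            return bad_id
    return None

# Example: node 17 was blacklisted; a packet arrives from "new" ID 42
profiles = {17: -71.5}
print(is_probable_sybil(42, -70.8, profiles, {17}))  # -> 17
```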
52

THE PRINCIPLE OF DATA FLOW EQUILIBRIUM FOR RESERVOIR MINIMIZATION IN PERIODIC INTERMITTENT NETWORKS

Tahboub, Omar Y. 29 April 2013 (has links)
No description available.
53

Security Issues in Network Virtualization for the Future Internet

Natarajan, Sriram 01 September 2012 (has links)
Network virtualization promises to play a dominant role in shaping the future Internet by overcoming the Internet ossification problem. Since a single protocol stack cannot accommodate the requirements of diverse application scenarios and network paradigms, it is evident that multiple networks should co-exist on the same network infrastructure. Network virtualization supports this by hosting multiple, diverse protocol suites on a shared network infrastructure. Each hosted virtual network instance can dynamically instantiate a custom set of protocols and functionalities on the resources (e.g., link bandwidth, CPU, memory) allocated from the network substrate. As this technology matures, it is important to consider the security issues and develop efficient defense mechanisms against potential vulnerabilities in the network architecture. The architectural separation of network entities (i.e., network infrastructures, hosted virtual networks, and end-users) introduces a set of attacks that are to some extent different from what can be observed in the current Internet. Each entity is driven by different objectives, and hence it cannot be assumed that they will always cooperate to ensure that all aspects of the network operate correctly and securely. Instead, the network entities may behave in a non-cooperative or malicious way to gain benefits. This work proposes a set of defense mechanisms that address the following challenges: 1) How can the network virtualization architecture ensure anonymity and user privacy (i.e., confidential packet forwarding functionality) when virtual networks are hosted on third-party network infrastructures? 2) With the flexibility of customizing the virtual network and the need for intrinsic security guarantees, can a virtual network instance effectively prevent unauthorized network access by curbing attack traffic close to the source and ensure that only authorized traffic is transmitted? To address these challenges, this dissertation proposes multiple defense mechanisms. In a typical virtualized network, the network infrastructure and the virtual network are managed by different administrative entities that may not trust each other, raising the concern that an honest-but-curious network infrastructure provider may snoop on traffic sent by the hosted virtual networks. In such a scenario, the virtual network might hesitate to disclose operational information (e.g., source and destination addresses of network traffic, routing information, etc.) to the infrastructure provider. However, the network infrastructure does need sufficient information to perform packet forwarding. We present Encrypted IP (EncrIP), a protocol for encrypting IP addresses that hides information about the virtual network while still allowing packet forwarding with the longest-prefix matching techniques implemented in commodity routers. Using probabilistic encryption, EncrIP prevents an observer from identifying which traffic belongs to the same source-destination pair. Our evaluation results show that EncrIP requires only a few MB of memory on the gateways where traffic enters and leaves the network infrastructure. In our prototype implementation of EncrIP on GENI, which uses the standard IP header, the success probability of a statistical inference attack to identify packets belonging to the same session is less than 0.001%. Therefore, we believe EncrIP presents a practical solution for protecting privacy in virtualized networks.
While virtualizing the infrastructure components introduces flexibility by allowing the protocol stack to be reprogrammed, it does not directly solve the security issues encountered in the current Internet. On the contrary, the architecture increases the chances of additive vulnerabilities, thereby enlarging the attack space that can be exploited to launch attacks. It is therefore important to consider a virtual network instance that ensures only authorized traffic is transmitted and that attack traffic is squelched as close to its source as possible. Network virtualization provides an opportunity to host a network that can guarantee such high levels of security, thereby protecting both the end systems and the network infrastructure components (i.e., routers, switches, etc.). In this work, we introduce a virtual network instance based on a capabilities-based network, which presents a fundamental shift in the security design of network architectures. Instead of permitting the transmission of packets from any source to any destination, routers deny forwarding by default. For a successful transmission, packets need to positively identify themselves and their permissions to each router in the forwarding path. The proposed capabilities-based system uses packet credentials based on Bloom filters. This high-performance design of capabilities makes it feasible to verify traffic on every router in the network and to contain most attack traffic within a single hop. Our experimental evaluation confirms that less than one percent of attack traffic passes the first hop and that the performance overhead can be as low as 6% for large file transfers. Next, to identify packet-forwarding misbehaviors in network virtualization, a controller-based misbehavior detection system is discussed as part of the future work. Overall, this dissertation introduces novel security mechanisms that can be instantiated as inherent security features in the network architecture of the future Internet. The technical challenges in this dissertation involve solving problems from computer networking, network security, principles of protocol design, probability and random processes, and algorithms.
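To make the capabilities idea more concrete, here is a minimal sketch of a Bloom-filter-style packet credential: the sender encodes the routers on the authorized path, and each router forwards only if its own identifier tests positive. The filter size, hash construction, and router names are assumptions for illustration and are not taken from the dissertation's design.

```python
# Hedged sketch of a Bloom-filter packet credential, loosely following the
# capabilities idea in the abstract. Sizes, hashes, and names are assumptions.
import hashlib

M = 256  # filter size in bits (assumed)
K = 3    # number of hash functions (assumed)

def _positions(item: str):
    # Derive K bit positions for an item from salted SHA-256 digests.
    for i in range(K):
        digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
        yield int.from_bytes(digest[:4], "big") % M

def make_credential(authorized_routers):
    """Sender-side: encode the routers on the permitted path into a bitmask."""
    bits = 0
    for router_id in authorized_routers:
        for pos in _positions(router_id):
            bits |= 1 << pos
    return bits

def router_permits(credential: int, router_id: str) -> bool:
    """Router-side: forward only if all of this router's bits are set."""
    return all(credential >> pos & 1 for pos in _positions(router_id))

cred = make_credential(["r1", "r7", "r9"])
print(router_permits(cred, "r7"))   # True: on the authorized path
print(router_permits(cred, "r4"))   # almost surely False (small false-positive rate)
```

The small false-positive rate inherent to Bloom filters is one plausible reason a tiny fraction of attack traffic can still pass the first hop, as the evaluation above reports.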
54

A new link lifetime estimation method for greedy and contention-based routing in mobile ad hoc networks

Noureddine, H., Ni, Q., Min, Geyong, Al-Raweshidy, H. January 2014 (has links)
Greedy and contention-based forwarding schemes were proposed for mobile ad hoc networks (MANETs) to perform data routing hop by hop, without prior discovery of the end-to-end route to the destination. Accordingly, the neighboring node that satisfies specific criteria is selected as the next forwarder of the packet. Both schemes require the nodes participating in the selection process to be within the area facing the location of the destination. Therefore, the lifetime of links for such schemes depends not only on the transmission range, but also on the location parameters (position, speed and direction) of the sending node and the neighboring node, as well as of the destination. In this paper, we propose a new link lifetime prediction method for greedy and contention-based routing which can also be utilized as a new stability metric. The proposed method is evaluated using a stability-based greedy routing algorithm, which selects the next-hop node with the highest link stability.
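For context, the sketch below shows the standard geometric way to estimate how long two nodes with known positions and velocities remain within transmission range. It is a generic baseline, not the paper's proposed method (which additionally accounts for the destination's location), and the coordinate and velocity inputs are assumed to come from a location service such as GPS.

```python
# Hedged sketch: a standard geometric link-lifetime estimate from relative
# position and velocity. This is a generic formula for context, not the
# paper's proposed method.
import math

def link_lifetime(pos_a, vel_a, pos_b, vel_b, tx_range):
    """Seconds until nodes a and b drift out of tx_range, assuming constant velocity."""
    px, py = pos_b[0] - pos_a[0], pos_b[1] - pos_a[1]   # relative position
    vx, vy = vel_b[0] - vel_a[0], vel_b[1] - vel_a[1]   # relative velocity
    vv = vx * vx + vy * vy
    pp = px * px + py * py
    if pp > tx_range * tx_range:
        return 0.0                      # already out of range
    if vv == 0.0:
        return math.inf                 # no relative motion: link never expires
    pv = px * vx + py * vy
    # Solve |p + v*t| = tx_range for the positive root t.
    disc = pv * pv - vv * (pp - tx_range * tx_range)
    return (-pv + math.sqrt(disc)) / vv

# Example: node b moves directly away at 5 m/s, 100 m range, currently 30 m apart
print(link_lifetime((0, 0), (0, 0), (30, 0), (5, 0), 100))  # 14.0 seconds
```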
55

Forwarding Strategies in Information Centric Networking

Sadek, Ahmed January 2016 (has links)
The Internet of the 21st century is a different version of the original Internet. It is increasingly a huge distribution network for large quantities of data (photos, music, and video) with different types of connections and needs. TCP/IP, the workhorse of the Internet, was intended as a vehicle for transporting best-effort, connection-oriented data, where the main focus is on moving data from point A to point B regardless of the type of data or the nature of the path. Information Centric Networking (ICN) is a paradigm shift in networking that moves the focus from the host address to the content name. The current TCP/IP model for transporting data depends on establishing an end-to-end connection between client and server. In ICN, the client instead requests data by name and the request is handled by the network, without needing to reach a fixed server address each time, as any node in the network can serve the data. ICN works hop by hop, where each node has visibility of the content requested, enabling it to make more sophisticated decisions than in TCP/IP, where a forwarding node decides based only on the source and destination IP addresses. ICN has several implementation projects with different visions; one of them is Named Data Networking (NDN), which is what we use in this work. The NDN/ICN architecture consists of different layers, one of which is the Forwarding Strategy (FS) layer, responsible for deciding how to forward each incoming request/response. In this thesis we implement and simulate three forwarding strategies (Best Face Selection, Round Robin, and Weighted Round Robin) and investigate how they adapt to changes in link bandwidth under variable traffic rates. We performed a number of simulations using the ndnSIM v2.1 simulator and concluded that Weighted Round Robin offers higher throughput and reliability than the other two strategies. All three strategies offer better reliability than a single static face and lower cost than the broadcast strategy. We also concluded that there is a need for a dynamic congestion control algorithm that takes the dynamic nature of ICN into consideration.
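To illustrate how a Weighted Round Robin forwarding strategy might choose among faces, the following is a minimal sketch; it does not use the actual ndnSIM/NFD strategy API, and the face identifiers and weights (e.g., proportional to measured link bandwidth) are assumptions.

```python
# Hedged sketch of Weighted Round Robin face selection; not the ndnSIM/NFD
# strategy API. Face IDs and weights are illustrative assumptions (e.g.
# weights proportional to measured link bandwidth).

class WeightedRoundRobin:
    def __init__(self, faces):
        # faces: dict of face_id -> positive integer weight
        self.weights = dict(faces)
        self.credits = {f: 0 for f in faces}

    def next_face(self):
        """Smooth WRR: give every face credit, then send on the richest one."""
        for face, weight in self.weights.items():
            self.credits[face] += weight
        chosen = max(self.credits, key=self.credits.get)
        self.credits[chosen] -= sum(self.weights.values())
        return chosen

wrr = WeightedRoundRobin({"face-1": 3, "face-2": 1})
print([wrr.next_face() for _ in range(4)])  # ['face-1', 'face-1', 'face-2', 'face-1']
```

Smooth weighting like this spreads Interests over faces in proportion to their weights instead of sending long bursts to a single face.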
56

Processkartläggning i samband med verksamhetsflytt : En fallstudie på SAAB Training & Simulation och SAAB Avionics Systems spedition- och ankomstprocesser / Process mapping in the context of operation movement : A case study on the forwarding and arrival process of SAAB Training & Simulation and SAAB Avionics Systems

Rådegård, Tobias, Oscarsson, David January 2016 (has links)
Purpose - The purpose of this thesis is to study how two separate operations, SAAB Training & Simulation and SAAB Avionics Systems, manage their forwarding and arrival processes. This is done to investigate the background of what has shaped the processes and thereby enable the companies to cooperate on a common goods reception starting in 2017. Further, this study includes organizational suggestions on how the integrated goods reception should be managed. Method - A comparison between the two companies was made by mapping the processes of each operation, in accordance with the principles of a case study. By continuously collecting data through in-depth interviews, a clear connection to the purpose was ensured. Further, a literature review was conducted in order to connect reality to current theories and thereby present well-grounded suggestions. Findings - This study proposes two process maps. The first focuses on how SAAB should manage its goods reception from an organizational perspective immediately after SAAB Avionics moves its operations. The second process map aims to present a solution over a five-year period. Both maps include recommendations and concrete suggestions on how this should be done, as well as what companies should take into account when designing forwarding and arrival processes in general.
Implications - Since the results are split between two time periods, only those for the first time frame can be regarded as directly applicable, as they were focused on creating a functioning flow. The implications and results regarding the second time frame can instead be seen as a pre-study, or as suggestions for future research on the basis of the companies' own incentives. Delimitations - Since a full analysis of how the common goods reception should be run would be too extensive, this study is delimited to how it should be managed from an organizational perspective. Key terms - Goods reception, moving of an operation, process mapping, forwarding and arrival processes, customer requirements.
57

An empirical analysis of the culture in DHL Global Forwarding and concrete suggestions on how to develop the culture into a strategic example

Joubert, Melanie 12 1900 (has links)
Thesis (MBA)--Stellenbosch University, 2014.
Since 2006, DGF has struggled with a diverse and disjointed corporate culture. In this industry, the diversity did not bring about a competitive advantage; on the contrary, it negatively influenced the organisation's sustainable performance. It was clear that there were many different cultures, sub-cultures and ways of working within DGF, which affected the company's overall effectiveness, efficiency and performance. The core research question for this research assignment was: "What is the existing culture within DGF, how did this culture evolve and what can be done to change the culture into a strategic example?" The empirical analysis made use of quantitative research, where the majority of the research outcomes were based on the findings from two types of questionnaires. The first questionnaire, "Beehive 2.0", was used as an analytical tool to analyse DGF's culture, and the second was the DGF Employee Opinion Survey (EOS). This survey provided a safe environment in which employees could voice their opinions about the organisation. Initial informal one-on-one interviews, open-forum Senior Leadership Team discussions and group discussions were held to determine the team's general approach, how individuals felt about the organisational culture at the time of the research, and whether there was a real opportunity for an improved culture. The Senior Leadership Team identified organisational needs in terms of trust building, changing the culture and improving staff morale. It was clear from the discussions that people were cautious about speaking up on organisational culture issues in a group environment. A lot of hurt came to light, and without the ability to ensure confidentiality and privacy, a true reflection of how people perceived the culture would not have surfaced. As a result, the quantitative data gathered formed only a small part of the total data gathered for this research. Through the initial informal discussions referred to above, it became apparent that there was a lack of trust and collaboration, and a culture of fear among the employees. The series of acquisitions DGF has been involved in over the years, without solid change management to ensure a unified culture, resulted in many different cultures and sub-cultures within the company. Change management coupled with a unified culture was promised to the employees prior to the acquisitions, but it never materialised. This left the employees uncertain and wary of trusting their leaders. Communication throughout the organisation was poor and there was a top-down approach to strategy creation. As a result, employees did not feel empowered to make their own decisions, which negatively influenced their trust in the organisation. There was little focus on talent creation; managers did not take the time to impart knowledge or develop employees; and, in general, employees felt neglected, under-valued and unappreciated. Employee engagement was very low. Most of the results obtained from the EOS and Beehive surveys confirmed the initial needs identified by the Senior Management Team, as well as the reasons for the low morale and negativity in DGF. Employees had lost confidence in a better tomorrow within DGF, and felt that it would no longer help to speak up.
No solid action plans had come from making their voices heard in the past, they were concerned about their job security, and previous EOS results showed that less than a third of employees felt that positive change could still happen. There was, therefore, a dire need to identify a cultural framework for DGF. Once the new-economy leadership culture was selected as the most suitable framework, the existing DGF culture needed to transition into it. A change of culture was therefore needed to restore the trust of the employees and achieve sustainable competitiveness in DGF, since employee satisfaction and performance are directly linked to organisational culture.
58

ULTRA-FAST AND MEMORY-EFFICIENT LOOKUPS FOR CLOUD, NETWORKED SYSTEMS, AND MASSIVE DATA MANAGEMENT

Yu, Ye 01 January 2018 (has links)
Systems that process big data (e.g., high-traffic networks and large-scale storage) prefer data structures and algorithms with small memory footprints and fast processing speed. Efficient and fast algorithms play an essential role in system design, despite improvements in hardware. This dissertation is organized around a novel algorithm called Othello Hashing. Othello Hashing supports ultra-fast and memory-efficient key-value lookup, and it fits the requirements of the core algorithms of many large-scale systems and big data applications. Using Othello Hashing, combined with domain expertise in cloud computing, computer networks, big data, and bioinformatics, I developed the following applications, which resolve several major challenges in these areas. Concise: Forwarding Information Base. A Forwarding Information Base (FIB) is a data structure used by the data plane of a forwarding device to determine the proper forwarding actions for packets. The polymorphic property of Othello Hashing is the separation of its query and control functionalities, which is a perfect match for programmable networks such as Software Defined Networks. Using Othello Hashing, we built a fast and scalable FIB named Concise. Extensive evaluation results on three different platforms show that Concise outperforms other FIB designs. SDLB: Cloud Load Balancer. In a cloud network, the layer-4 load balancer is a device that acts as a reverse proxy and distributes network or application traffic across a number of servers. We built a software load balancer named SDLB using Othello Hashing techniques. SDLB accomplishes two functionalities with one Othello query: finding the designated server for packets of ongoing sessions and distributing new or session-free packets. MetaOthello: Taxonomic Classification of Metagenomic Sequences. Metagenomic read classification is a critical step in the identification and quantification of microbial species sampled by high-throughput sequencing. Due to the growing popularity of metagenomic data in both basic science and clinical applications, as well as the increasing volume of data being generated, efficient and accurate algorithms are in high demand. We built a system that supports efficient classification of taxonomic sequences using their k-mer signatures. SeqOthello: RNA-seq Sequence Search Engine. Advances in the study of functional genomics have produced a vast supply of RNA-seq datasets. However, quickly querying and extracting information from sequencing resources remains a challenging problem and has been the bottleneck for the broader dissemination of sequencing efforts. The challenge resides in both the sheer volume of the data and its unstructured representation. Using the Othello Hashing techniques, we built the SeqOthello sequence search engine. SeqOthello is a reference-free, alignment-free, and parameter-free sequence search system that supports arbitrary sequence queries against large collections of RNA-seq experiments, which enables large-scale integrative studies using sequence-level data.
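To convey the core lookup idea, the following is a heavily simplified sketch of an Othello-style structure in which a query is just two array reads and an XOR. The toy construction (random hash seeds with retry on inconsistency), the array sizes, and the example keys are illustrative assumptions, not the dissertation's optimized design.

```python
# Hedged sketch of the Othello Hashing idea: query(k) = A[h1(k)] ^ B[h2(k)],
# with the two arrays filled so that the XOR reproduces each key's value.
# This toy construction is only meant to convey the principle.
import hashlib, random
from collections import defaultdict, deque

def _h(key, seed, size):
    return int.from_bytes(hashlib.sha256(f"{seed}:{key}".encode()).digest()[:8], "big") % size

def build_othello(kv, ma=8, mb=8, max_tries=100):
    for _ in range(max_tries):
        s1, s2 = random.random(), random.random()
        A, B = [0] * ma, [0] * mb
        adj = defaultdict(list)          # bipartite graph: ('A', i) -- ('B', j)
        for k, v in kv.items():
            a, b = ("A", _h(k, s1, ma)), ("B", _h(k, s2, mb))
            adj[a].append((b, v))
            adj[b].append((a, v))
        # Assign bucket values by BFS; an inconsistent constraint => retry.
        val, ok = {}, True
        for start in list(adj):
            if start in val:
                continue
            val[start] = 0
            q = deque([start])
            while q and ok:
                u = q.popleft()
                for w, x in adj[u]:
                    if w not in val:
                        val[w] = val[u] ^ x
                        q.append(w)
                    elif val[u] ^ val[w] != x:
                        ok = False
                        break
            if not ok:
                break
        if ok:
            for (side, i), x in val.items():
                (A if side == "A" else B)[i] = x
            return A, B, s1, s2
    raise RuntimeError("construction failed; try larger ma/mb")

def query(key, A, B, s1, s2):
    # Two memory reads and one XOR; keys never inserted return arbitrary values.
    return A[_h(key, s1, len(A))] ^ B[_h(key, s2, len(B))]

A, B, s1, s2 = build_othello({"10.0.0.0/8": 1, "10.1.0.0/16": 2, "192.168.0.0/16": 3})
print(query("10.1.0.0/16", A, B, s1, s2))   # -> 2
```

The appeal for FIBs and load balancers is that the query side touches only two memory locations and needs no key comparison, while updates are handled separately by a control side that maintains the arrays.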
59

Commande d'un véhicule hypersonique à propulsion aérobie : modélisation et synthèse / Control of an air-breathing hypersonic vehicle: modelling and control synthesis

Poulain, François 28 March 2012 (has links) (PDF)
High-speed air-breathing propulsion has long been identified as one of the next technological leaps in the field of space launchers. However, since hypersonic vehicles (HSVs) operate in extremely high speed regimes, numerous constraints and uncertainties hinder guarantees on controller properties. The purpose of this thesis is to study control synthesis for such a vehicle. The first step is to define a representative HSV model that can be used for control design. In this work we build two HSV models: one for closed-loop simulation and a second to state the control problem precisely. We then propose a control synthesis for the longitudinal dynamics in the vertical plane of symmetry. It is robust to modeling uncertainties, tolerant of saturations, and does not excite the neglected fast dynamics. Its properties are evaluated on various simulation cases. An extension is then proposed to solve the problem of simultaneously controlling the longitudinal and lateral dynamics under the same constraints. This result is obtained by Lyapunov function assignment, following a study of the longitudinal and lateral dynamics. Furthermore, to deal with tracking errors due to modeling uncertainties, we address the problem of robust asymptotic regulation by state feedback. We show that this regulation can be achieved by stabilizing the system augmented with an integrator of the output. This constitutes an extension of the proportional-integral control structure to the case of nonlinear systems.
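To make the final point concrete, here is a minimal sketch of the output-integrator augmentation behind this nonlinear extension of proportional-integral control, written in generic notation rather than the thesis's specific HSV model:

```latex
% Generic sketch of robust regulation via output-integrator augmentation
% (illustrative notation, not the thesis's hypersonic vehicle model).
\begin{aligned}
  \text{Plant:}\quad        & \dot{x} = f(x,u), \qquad y = h(x) \\
  \text{Augmentation:}\quad & \dot{z} = y - y_{\mathrm{ref}} \\
  \text{Feedback:}\quad     & u = k(x, z) \ \text{chosen to stabilize the augmented state } (x, z)
\end{aligned}
```

At any equilibrium of the closed loop, $\dot{z} = 0$ forces $y = y_{\mathrm{ref}}$, so constant tracking errors caused by modeling uncertainty are rejected; when $f$ is linear and $k$ is a linear gain, this reduces to ordinary PI control.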
60

Virtual Frameworks for Source Migration

Chi, Jack January 2004 (has links)
Virtual Frameworks for source migration is a methodology for extracting classes and interfaces from one or more frameworks used by an application. After migration, a new set of frameworks, called virtual frameworks, can replace the original frameworks. The extracted classes and interfaces are used to create a proxy layer for these new frameworks. The application then depends on this proxy layer, and through it on the new frameworks, rather than on the original frameworks. A combination of three patterns (Bridge, Adapter, and Proxy) is used in these new frameworks. In this way, the changes made to the application source code during migration are minimized.
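As a toy illustration of the proxy-layer idea (in Python rather than whatever language the thesis targets, and with invented class names), the application can be written against a thin extracted interface whose adapter delegates to the concrete framework, so swapping frameworks only touches the adapter:

```python
# Hedged sketch of a "virtual framework" proxy layer combining Adapter/Bridge/
# Proxy ideas; class and method names are invented for illustration.
from abc import ABC, abstractmethod

class HttpClient(ABC):
    """Extracted interface the application depends on (the virtual framework)."""
    @abstractmethod
    def get(self, url: str) -> str: ...

class LegacyFrameworkHttp:
    """Stand-in for a class from the original framework."""
    def fetch(self, address: str) -> str:
        return f"<legacy response from {address}>"

class LegacyHttpAdapter(HttpClient):
    """Proxy-layer class: adapts the original framework to the extracted interface."""
    def __init__(self, impl: LegacyFrameworkHttp):
        self._impl = impl
    def get(self, url: str) -> str:
        return self._impl.fetch(url)      # delegation; translation logic lives here

def application_code(client: HttpClient) -> str:
    # The application sees only the virtual framework, never the real one.
    return client.get("https://example.org")

print(application_code(LegacyHttpAdapter(LegacyFrameworkHttp())))
```

Migrating to a different framework then means writing a new adapter that implements HttpClient, while application_code stays untouched.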
