31

Modeling Adversarial Insider Vehicles in Mix Zones

Plewtong, Nicholas 01 March 2018 (has links) (PDF)
Security is a necessity when dealing with new forms of technology that may not have been analyzed from a security perspective. One of the latest such advances is the Vehicular Ad-Hoc Network (VANET). VANETs allow vehicles to communicate information to each other wirelessly, which increases safety and efficiency for vehicles. However, this new type of computerized system also brings the need to maintain security on top of it. To protect the location privacy of the vehicles in the system, vehicles change pseudonyms, or identifiers, at areas known as mix zones. This thesis implements a model that characterizes the attack surface of an adversarial insider vehicle inside a VANET. This adversarial vehicle model describes the interactions and effects that an attacker vehicle can have on mix zones in order to lower the overall location privacy of the system while remaining undetected by defenders in the network. In order to reach the final simulation of the model, several underlying models had to be developed around the interactions of defender and attacker vehicles. The evaluation of this model shows that internal attacker vehicles can have a significant impact on location privacy within mix zones. The simulation results show that one to five optimal attackers decrease the location privacy of the network by 0.6%-2.6%, and that an attacker defecting in a 50-node network decreases the potential location privacy of a mix zone by 12%. The industry needs to consider implementing defenses against this particular attack surface.
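A minimal sketch of the entropy-style location-privacy measure that mix-zone analyses of this kind typically rely on: privacy is the Shannon entropy of the adversary's distribution over pseudonym mappings, and an insider that reveals its own mapping shrinks the anonymity set. The distributions and the size of the drop below are invented numbers, not the thesis's models.

```go
package main

import (
	"fmt"
	"math"
)

// entropy returns the Shannon entropy (in bits) of a discrete distribution.
func entropy(probs []float64) float64 {
	h := 0.0
	for _, p := range probs {
		if p > 0 {
			h -= p * math.Log2(p)
		}
	}
	return h
}

func main() {
	// With no insider, a 4-vehicle mix zone might leave the adversary with a
	// uniform guess over which exiting pseudonym belongs to the target.
	uniform := []float64{0.25, 0.25, 0.25, 0.25}

	// An insider attacker that reveals its own mapping (or defects) removes
	// itself from the anonymity set and skews the remaining distribution.
	withInsider := []float64{0.5, 0.3, 0.2} // example values, not from the thesis

	fmt.Printf("privacy without insider: %.2f bits\n", entropy(uniform))
	fmt.Printf("privacy with insider:    %.2f bits\n", entropy(withInsider))
}
```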
32

High-Speed Mobile Networks for Modern Farming and Agricultural Systems

Najar, Santos 01 June 2014 (has links) (PDF)
High-speed mobile networks are necessary for agriculture to inventory individual plant health, maximize yield, and minimize the resources applied. More specifically, real-time information on individual plant status is critical to decisions regarding the management of resources reserved and expended. This necessity can be met by the availability of environmental sensors (such as humidity, temperature, and pH) whose data is kept on storage servers connected to static and mobile local area networks. These static and mobile local area networks are in turn connected to cellular, core, and satellite networks. For instance, agricultural experts remotely working on vast-acreage farms from business offices or while traveling can easily connect their notebook computers and other portable devices to these networks in order to check farm status, send email, read industry news, or arrange a visit to neighboring farms or suppliers. Today, several mobile phone companies offer broadband service with 2 Mbps downlink in rural and dense urban areas; however, such coverage typically does not exist in farm areas. Although these networks (such as 802.11ac/n, 3G, and 4G) are significant achievements, they do not meet the projected needs of the agricultural industry. The present use model of high-speed networks for email and multimedia content, together with agriculture's expected intensive use of real-time plant and environmental condition monitoring, with statistics/plots and real-time high-resolution video, necessitates a highly integrated and highly available networked system. For agricultural experts, attentive to market needs, seamless high-speed wireless communication 'anywhere, anytime, at any speed' is critical to enhancing their productivity and crop yields.
33

HTTP 1.2: Distributed HTTP for Load Balancing Server Systems

O'Daniel, Graham M 01 June 2010 (has links) (PDF)
Content hosted on the Internet must appear robust and reliable to clients relying on such content. As more clients come to rely on content from a source, that source can be subjected to high levels of load. There are a number of solutions, collectively called load balancers, which try to solve the load problem through various means. All of these solutions are workarounds for dealing with problems inherent in the medium by which content is served, thereby limiting their effectiveness. HTTP, or Hypertext Transfer Protocol, is the dominant mechanism behind hosting content on the Internet through websites. The Internet as a whole has changed drastically over its history, with the invention of new protocols, distribution methods, and technological improvements. However, HTTP has undergone only three versions since its inception in 1991, and all three versions serve content as a text stream that cannot be interrupted to allow for load balancing decisions. We propose a solution that takes existing portions of HTTP, augments them, and adds new features in order to improve the usability and manageability of serving content over the Internet by allowing redirection of content in-stream. This in-stream redirection introduces a new step into the client-server connection in which servers can make decisions while continuing to serve content to the client. Load balancing methods can then use the new version of HTTP to make better decisions when applied to multi-server systems, making load balancing more robust and giving more control over the client-server interaction.
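The sketch below illustrates the in-stream redirection idea with a toy Go handler: the server begins streaming a response and, if it decides to shed load mid-stream, signals a hypothetical continuation point in an HTTP trailer. The trailer name, mirror URL, and the use of trailers at all are stand-ins for illustration, not the protocol changes the thesis actually proposes.

```go
package main

import (
	"fmt"
	"net/http"
)

const overloaded = true // stand-in for a real load-balancing decision

func handler(w http.ResponseWriter, r *http.Request) {
	// Declare the trailer before the body so it is sent with chunked encoding.
	w.Header().Set("Trailer", "X-Continue-At")
	w.Header().Set("Content-Type", "text/plain")

	flusher, _ := w.(http.Flusher)
	for i := 0; i < 4; i++ {
		fmt.Fprintf(w, "chunk %d of the resource\n", i)
		if flusher != nil {
			flusher.Flush()
		}
		if overloaded && i == 1 {
			// Stop serving mid-stream and point the client at another server.
			w.Header().Set("X-Continue-At", "http://mirror.example/resource?offset=2")
			return
		}
	}
}

func main() {
	http.HandleFunc("/resource", handler)
	http.ListenAndServe(":8080", nil) // error ignored for brevity
}
```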
34

Protecting Controllers against Denial-of-Service Attacks in Software-Defined Networks

Li, Jingrui 07 November 2016 (has links)
Connection setup in software-defined networks (SDN) requires considerable amounts of processing, communication, and memory resources. Attackers can target SDN controllers with denial-of-service attacks that exhaust these resources. This thesis proposes a new defense mechanism based on a proof-of-work protocol to protect controllers against such attacks, presents an implementation of the system, and analyzes its performance. The key characteristics of this protocol, namely its one-way operation, its requirement for freshness in proofs of work, its adjustable difficulty, its ability to work with multiple network providers, and its use of existing TCP/IP header fields, ensure that this approach can be used in practice.
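A minimal sketch of the proof-of-work pattern the listed characteristics describe: before the controller spends resources on connection setup, a client presents a nonce whose one-way hash over its identity and a fresh timestamp meets an adjustable difficulty. The field layout, hash choice, and difficulty value are assumptions for illustration; the thesis's actual protocol carries the proof in existing TCP/IP header fields.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"math/bits"
	"time"
)

const difficulty = 16           // required leading zero bits (adjustable)
const maxAge = 30 * time.Second // freshness window for the timestamp

func digest(clientID string, ts int64, nonce uint64) [32]byte {
	buf := make([]byte, len(clientID)+16)
	copy(buf, clientID)
	binary.BigEndian.PutUint64(buf[len(clientID):], uint64(ts))
	binary.BigEndian.PutUint64(buf[len(clientID)+8:], nonce)
	return sha256.Sum256(buf)
}

func leadingZeroBits(h [32]byte) int {
	n := 0
	for _, b := range h {
		if b == 0 {
			n += 8
			continue
		}
		n += bits.LeadingZeros8(b)
		break
	}
	return n
}

// solve is what the client runs; verify is what the controller runs.
func solve(clientID string, ts int64) uint64 {
	for nonce := uint64(0); ; nonce++ {
		if leadingZeroBits(digest(clientID, ts, nonce)) >= difficulty {
			return nonce
		}
	}
}

func verify(clientID string, ts int64, nonce uint64) bool {
	fresh := time.Since(time.Unix(ts, 0)) < maxAge
	return fresh && leadingZeroBits(digest(clientID, ts, nonce)) >= difficulty
}

func main() {
	ts := time.Now().Unix()
	nonce := solve("host-10.0.0.7", ts)
	fmt.Println("proof accepted:", verify("host-10.0.0.7", ts, nonce))
}
```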
35

Digital Equalization of Fiber-Optic Transmission System Impairments

Luo, Ting 10 1900 (has links)
In the past half century, numerous improvements have been achieved to make fiber-optic communication systems outperform other traditional transmission systems, such as electrical coaxial systems, in many applications. However, physical impairments including fiber loss, chromatic dispersion, polarization mode dispersion, laser phase noise, and nonlinear effects still pose a major obstacle in fiber-optic communication systems. In the past two decades, along with the evolution of digital signal processing systems, digital approaches to compensating these effects have become a simpler and less expensive solution.

In this thesis, we discuss digital equalization techniques to mitigate fiber-optic transmission impairments. We explain the methodology behind our implementation of this simulation tool. Several major parts of such a digital compensation scheme, such as the laser phase noise estimator, the fixed chromatic dispersion compensator, and the adaptive equalizer, are discussed. Two different types of adaptive equalizer algorithms are also compared and discussed. Our results show that the digital compensation scheme using the least mean squares (LMS) algorithm can perfectly compensate all linear distortion effects, and the laser phase noise compensator is optional in this scheme. Our results also show that the digital compensation scheme using the constant modulus algorithm (CMA) incurs a power penalty of about 3-4 dB compared to the LMS algorithm. The CMA has the advantage of being capable of blind detection and self-recovery, but the laser phase noise compensator is not optional in this scheme. A digital compensation scheme that combines the CMA and LMS algorithms would be a promising receiver scheme for future work. / Master of Applied Science (MASc)
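A hedged sketch of the LMS tap update at the heart of such an adaptive equalizer: each symbol, the taps are nudged along the gradient of the squared error between the equalizer output and a known training symbol. The tap count, step size, and the toy one-coefficient "channel" are assumptions for illustration and do not model real fiber impairments.

```go
package main

import (
	"fmt"
	"math/cmplx"
)

// lmsStep filters the input window x with taps w, then updates the taps
// toward the desired symbol d using step size mu. Complex samples stand in
// for the optical field after coherent detection.
func lmsStep(w, x []complex128, d complex128, mu float64) complex128 {
	var y complex128
	for i := range w {
		y += w[i] * x[i]
	}
	e := d - y // error signal
	for i := range w {
		w[i] += complex(mu, 0) * e * cmplx.Conj(x[i])
	}
	return y
}

func main() {
	w := make([]complex128, 5) // 5-tap equalizer, initialized to zero
	mu := 0.05

	// Toy training loop: the "channel" simply attenuates the symbol, and the
	// equalizer gradually learns to undo it.
	symbols := []complex128{1, -1, 1, 1, -1, -1, 1, -1, 1, 1}
	window := make([]complex128, 5)
	for _, s := range symbols {
		copy(window[1:], window[:4]) // shift the delay line
		window[0] = 0.8 * s          // distorted received sample
		y := lmsStep(w, window, s, mu)
		fmt.Printf("out=%5.2f  target=%5.2f\n", real(y), real(s))
	}
}
```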
36

Fog Computing with Go: A Comparative Study

Butterfield, Ellis H 01 January 2016 (has links)
The Internet of Things is a recent computing paradigm, defined by networks of highly connected things – sensors, actuators, and smart objects – communicating across networks of homes, buildings, vehicles, and even people. The Internet of Things brings with it a host of new problems, from managing security on constrained devices to processing never-before-seen amounts of data. While cloud computing might be able to keep up with current data processing and computational demands, it is unclear whether it can be extended to the requirements brought forth by the Internet of Things. Fog computing provides an architectural solution to address some of these problems by providing a layer of intermediary nodes within what is called an edge network, separating the local object networks from the Cloud. These edge nodes provide interoperability, real-time interaction, routing, and, if necessary, computational delegation to the Cloud. This paper attempts to evaluate Go, a language developed by Google with distributed systems in mind, in the context of the requirements set forth by Fog computing. Methodologies similar to those of previous literature are simulated and benchmarked against in order to assess the viability of Go in the edge nodes of a Fog computing architecture.
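In the spirit of the comparisons the paper describes, the sketch below measures how quickly a small pool of goroutines drains a stream of sensor readings, the kind of workload an edge (fog) node must keep up with. The worker count, message count, and per-reading work are invented parameters, not the paper's benchmarks.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type reading struct {
	deviceID int
	value    float64
}

func main() {
	const workers = 8
	const messages = 1_000_000

	in := make(chan reading, 1024)
	var wg sync.WaitGroup

	start := time.Now()
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for r := range in {
				_ = r.value * 1.8 // stand-in for per-reading processing
			}
		}()
	}
	for i := 0; i < messages; i++ {
		in <- reading{deviceID: i % 100, value: float64(i)}
	}
	close(in)
	wg.Wait()

	elapsed := time.Since(start)
	fmt.Printf("%d readings in %v (%.0f msg/s)\n",
		messages, elapsed, float64(messages)/elapsed.Seconds())
}
```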
37

Measuring Effectiveness of Address Schemes for AS-level Graphs

Zhuang, Yinfang 01 January 2012 (has links)
This dissertation presents measures of efficiency and locality for Internet addressing schemes. Historically speaking, many issues faced by the Internet have been solved just in time to make the Internet "just work" [justWork]. Consensus, however, has been reached that today's Internet routing and addressing system is facing serious scaling problems: multi-homing, which causes finer granularity of routing policies and finer control to realize various traffic engineering requirements; an increased demand for provider-independent prefix allocations, which injects unaggregatable prefixes into the Default Free Zone (DFZ) routing table; and an ever-increasing population of Internet users and mobile edge devices. As a result, the DFZ routing table is again growing at an exponential rate. Hierarchical, topology-based addressing has long been considered crucial to routing and forwarding scalability. Recently, however, a number of research efforts are considering alternatives to this traditional approach. With the goal of informing such research, we investigated the efficiency of address assignment in the existing (IPv4) Internet. In particular, we ask the question: "how can we measure the locality of an address scheme given an input AS-level graph?" To do so, we first define a notion of efficiency or locality based on the average number of bit-hops required to advertise all prefixes in the Internet. In order to quantify how far from "optimal" the current Internet is, we assign prefixes to ASes "from scratch" in a manner that preserves observed semantics, using three increasingly strict definitions of equivalence. Next we propose another metric that in some sense quantifies the "efficiency" of the labeling and is independent of forwarding/routing mechanisms. We validate the effectiveness of the metric by applying it to a series of address schemes with increasing randomness given an input AS-level graph. After that we apply the metric to the current Internet address scheme across years and compare the results with those of compact routing schemes.
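A hedged sketch of a bit-hops-style cost on a toy AS graph: here the cost of advertising a prefix is taken as its bit length times the number of hops the advertisement travels to reach every other AS, averaged over all prefixes. The graph, prefix lengths, and this exact cost definition are illustrative assumptions, not the dissertation's precise metric.

```go
package main

import "fmt"

// bfsDistances returns hop counts from src to every AS in the graph.
func bfsDistances(graph map[int][]int, src int) map[int]int {
	dist := map[int]int{src: 0}
	queue := []int{src}
	for len(queue) > 0 {
		u := queue[0]
		queue = queue[1:]
		for _, v := range graph[u] {
			if _, seen := dist[v]; !seen {
				dist[v] = dist[u] + 1
				queue = append(queue, v)
			}
		}
	}
	return dist
}

func main() {
	// Tiny AS-level graph: AS number -> neighbors.
	graph := map[int][]int{
		1: {2, 3},
		2: {1, 3, 4},
		3: {1, 2},
		4: {2},
	}
	// Bit length of the prefix originated by each AS (e.g. a /16 costs 16 bits).
	prefixBits := map[int]int{1: 16, 2: 24, 3: 24, 4: 20}

	totalBitHops, prefixes := 0, 0
	for origin, bitsLen := range prefixBits {
		hops := 0
		for _, d := range bfsDistances(graph, origin) {
			hops += d
		}
		totalBitHops += bitsLen * hops
		prefixes++
	}
	fmt.Printf("average bit-hops per prefix: %.1f\n",
		float64(totalBitHops)/float64(prefixes))
}
```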
38

Understanding Home Networks with Lightweight Privacy-Preserving Passive Measurement

Zhou, Xuzi 01 January 2016 (has links)
Homes are involved in a significant fraction of Internet traffic. However, meaningful and comprehensive information on the structure and use of home networks is still hard to obtain. The two main challenges in collecting such information are the lack of measurement infrastructure in the home network environment and individuals' concerns about information privacy. To tackle these challenges, the dissertation introduces the Home Network Flow Logger (HNFL) to bring lightweight privacy-preserving passive measurement to home networks. The core of HNFL is a Linux kernel module that runs on resource-constrained commodity home routers to collect network traffic data from raw packets. Unlike prior passive measurement tools, HNFL is shown to work without harming either data accuracy or router performance. This dissertation also includes a months-long field study that collects passive measurement data in a non-intrusive way from home network gateways, where network traffic is not yet mixed by NAT (Network Address Translation). The comprehensive data collected from over fifty households are analyzed to learn the characteristics of home networks, such as the number and distribution of connected devices, traffic distribution among internal devices, network availability, downlink/uplink bandwidth, data usage patterns, and application traffic distribution.
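A minimal sketch of the kind of lightweight, privacy-preserving flow aggregation a tool like HNFL performs: packets are folded into per-flow counters keyed by a 5-tuple, and remote addresses are replaced by a truncated salted hash before anything leaves the router. The field names, hashing choice, and salt are assumptions for illustration; HNFL itself is implemented as a Linux kernel module, not a user-space program like this one.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

type flowKey struct {
	device     string // internal device identifier
	remoteHash string // salted hash of the external address
	proto      string
	port       uint16
}

type flowStats struct {
	packets int
	bytes   int
}

var salt = []byte("per-deployment-secret") // hypothetical per-household salt

func anonymize(addr string) string {
	sum := sha256.Sum256(append(salt, addr...))
	return hex.EncodeToString(sum[:8]) // truncated digest, not stored in the clear
}

func main() {
	flows := map[flowKey]*flowStats{}

	record := func(device, remote, proto string, port uint16, size int) {
		k := flowKey{device, anonymize(remote), proto, port}
		if flows[k] == nil {
			flows[k] = &flowStats{}
		}
		flows[k].packets++
		flows[k].bytes += size
	}

	record("laptop", "93.184.216.34", "tcp", 443, 1500)
	record("laptop", "93.184.216.34", "tcp", 443, 600)
	record("thermostat", "203.0.113.9", "udp", 123, 76)

	for k, s := range flows {
		fmt.Printf("%-10s -> %s %s/%d  %d pkts  %d bytes\n",
			k.device, k.remoteHash, k.proto, k.port, s.packets, s.bytes)
	}
}
```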
39

Application of Huffman Data Compression Algorithm in Hashing Computation

Devulapalli Venkata, Lakshmi Narasimha 01 April 2018 (has links)
Cryptography is the art of protecting information by encrypting the original message into an unreadable format. A cryptographic hash function is a hash function which takes an arbitrary-length text message as input and converts it into a fixed length of encrypted characters that is infeasible to invert. The values returned by the hash function are called the message digest or simply hash values. Because of their versatility, hash functions are used in many applications such as message authentication, digital signatures, and password hashing [Thomsen and Knudsen, 2005]. The purpose of this study is to apply the Huffman data compression algorithm to the SHA-1 hash function in cryptography. The Huffman data compression algorithm is an optimal prefix-code compression algorithm in which the frequencies of the letters are used to compress the data [Huffman, 1952]. An integrated approach is applied to achieve a new compressed hash function by integrating Huffman compressed codes into the core functionality of the hashing computation of the original hash function.
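A hedged sketch of the compress-then-hash idea using Go's standard library: flate in Huffman-only mode supplies the Huffman entropy coding and crypto/sha1 supplies the digest. The thesis integrates Huffman codes inside the SHA-1 computation itself; this simplified pipeline only illustrates the surrounding idea and does not replicate that integration.

```go
package main

import (
	"bytes"
	"compress/flate"
	"crypto/sha1"
	"fmt"
)

func compressedDigest(msg []byte) ([20]byte, error) {
	var buf bytes.Buffer
	// HuffmanOnly disables LZ77 matching, leaving pure Huffman entropy coding.
	w, err := flate.NewWriter(&buf, flate.HuffmanOnly)
	if err != nil {
		return [20]byte{}, err
	}
	if _, err := w.Write(msg); err != nil {
		return [20]byte{}, err
	}
	if err := w.Close(); err != nil {
		return [20]byte{}, err
	}
	return sha1.Sum(buf.Bytes()), nil
}

func main() {
	msg := []byte("an arbitrary length text message to be compressed and hashed")
	digest, err := compressedDigest(msg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("compressed-then-hashed digest: %x\n", digest)
}
```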
40

AUTOMATED NETWORK SECURITY WITH EXCEPTIONS USING SDN

Rivera Polanco, Sergio A. 01 January 2019 (has links)
Campus networks have recently experienced a proliferation of devices ranging from personal use devices (e.g. smartphones, laptops, tablets), to special-purpose network equipment (e.g. firewalls, network address translation boxes, network caches, load balancers, virtual private network servers, and authentication servers), as well as special-purpose systems (badge readers, IP phones, cameras, location trackers, etc.). To establish directives and regulations regarding the ways in which these heterogeneous systems are allowed to interact with each other and the network infrastructure, organizations typically appoint policy writing committees (PWCs) to create acceptable use policy (AUP) documents describing the rules and behavioral guidelines that all campus network interactions must abide by. While users are the audience for AUP documents produced by an organization's PWC, network administrators are the responsible party enforcing the contents of such policies using low-level CLI instructions and configuration files that are typically difficult to understand and make it almost impossible to show that they do, in fact, enforce the AUPs. In other words, mapping the contents of imprecise unstructured sentences into technical configurations is a challenging task that relies on the interpretation and expertise of the network operator carrying out the policy enforcement. Moreover, there are multiple places where policy enforcement can take place. For example, policies governing servers (e.g., web, mail, and file servers) are often encoded into the server's configuration files. However, from a security perspective, conflating policy enforcement with server configuration is a dangerous practice because minor server misconfigurations could open up avenues for security exploits. On the other hand, policies that are enforced in the network tend to rarely change over time and are often based on one-size-fits-all policies that can severely limit the fast-paced dynamics of emerging research workflows found in campus networks. This dissertation addresses the above problems by leveraging recent advances in Software-Defined Networking (SDN) to support systems that enable novel in-network approaches to enforcing an organization's network security policies. Namely, we introduce PoLanCO, a human-readable yet technically precise policy language that serves as a middle ground between the imprecise statements found in AUPs and the technical low-level mechanisms used to implement them. Real-world examples show that PoLanCO is capable of implementing a wide range of policies found in campus networks. In addition, we also present the concept of Network Security Caps, an enforcement layer that separates server/device functionality from policy enforcement. A Network Security Cap intercepts packets coming from, and going to, servers and ensures policy compliance before allowing network devices to process packets using the traditional forwarding mechanisms. Lastly, we propose the on-demand security exceptions model to cope with the dynamics of emerging research workflows that are not suited for a one-size-fits-all security approach. In the proposed model, network users and providers establish trust relationships that can be used to temporarily bypass the policy compliance checks applied to general-purpose traffic, typically performed by network appliances that carry out Deep Packet Inspection and thereby create network bottlenecks.
We describe the components of a prototype exception system as well as experiments showing that through short-lived exceptions researchers can realize significant improvements for their special-purpose traffic.
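The sketch below illustrates, hypothetically, the translation step a language like PoLanCO automates: a human-readable policy statement is parsed into a match/action rule that an SDN controller could install as flow entries. The statement grammar, group names, and rule fields are invented for illustration; PoLanCO's actual syntax is defined in the dissertation.

```go
package main

import (
	"fmt"
	"strings"
)

type rule struct {
	srcGroup string // e.g. "dorm-network"
	dstGroup string // e.g. "mail-servers"
	port     string
	action   string // "allow" or "deny"
}

// parsePolicy handles lines of the invented form:
//   "deny dorm-network to mail-servers on port 25"
func parsePolicy(line string) (rule, error) {
	f := strings.Fields(line)
	if len(f) != 7 || f[2] != "to" || f[4] != "on" || f[5] != "port" {
		return rule{}, fmt.Errorf("unrecognized policy statement: %q", line)
	}
	return rule{action: f[0], srcGroup: f[1], dstGroup: f[3], port: f[6]}, nil
}

func main() {
	policies := []string{
		"deny dorm-network to mail-servers on port 25",
		"allow research-lab to data-transfer-nodes on port 443",
	}
	for _, p := range policies {
		r, err := parsePolicy(p)
		if err != nil {
			fmt.Println(err)
			continue
		}
		// In a real deployment these would become OpenFlow match/action entries.
		fmt.Printf("%s: %s -> %s tcp/%s\n", r.action, r.srcGroup, r.dstGroup, r.port)
	}
}
```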
