201

Measuring and Understanding TTL Violations in DNS Resolvers

Bhowmick, Protick 02 January 2024 (has links)
The Domain Name System (DNS) is a scalable, distributed caching architecture in which DNS records are cached at DNS servers distributed around the globe. DNS records include a time-to-live (TTL) value that dictates how long a record can be stored before it is evicted from the cache. TTL holds significant importance for DNS security, such as determining the caching period for DNSSEC-signed responses, as well as for performance, like the responsiveness of CDN-managed domains. At a high level, TTL is crucial for ensuring efficient caching, load distribution, and network security in the Domain Name System, and setting appropriate TTL values is a key aspect of DNS administration. Therefore, it is crucial to measure how TTL violations occur in resolvers. However, assessing how DNS resolvers worldwide handle TTL is not easy and typically requires access to multiple nodes distributed globally. In this work, we introduce a novel methodology for measuring TTL violations in DNS resolvers that leverages a residential proxy service called Brightdata, enabling us to evaluate more than 27,000 resolvers across 9,500 Autonomous Systems (ASes). Among the 8,524 resolvers that had at least five distinct exit nodes, we found that 8.74% arbitrarily extend TTLs. Additionally, we find that the DNSSEC standard is being disregarded by 44.1% of DNSSEC-validating resolvers, as they continue to serve DNSSEC-signed responses even after the RRSIGs have expired. / Master of Science / The Domain Name System (DNS) works as a global phonebook for the internet, helping your computer find websites by translating human-readable names into numerical IP addresses. This system uses a caching scheme spread across servers worldwide to store DNS records. Each record comes with a time-to-live (TTL) value, essentially a timer that decides how long the information should stay in the cache before being replaced. TTL is crucial for both security and performance in the DNS world: it plays a role in securing responses and determines the responsiveness of load-balancing schemes employed at Content Delivery Networks (CDNs). In simple terms, TTL ensures efficient caching, even network load, and overall security in the Domain Name System. For DNS to work smoothly, it is important to set appropriate TTL values and for resolvers to strictly honor them. However, figuring out how well DNS servers follow these rules globally is challenging. In this study, we introduce a new way to measure TTL violations in DNS servers using a proxy service called Brightdata, which allows us to check over 27,000 servers across 9,500 networks. Our findings reveal that 8.74% of these servers extend TTLs arbitrarily. Additionally, we discovered that 44.1% of servers that should be following a security standard (DNSSEC) are not doing so properly, providing signed responses even after they are supposed to have expired. This research sheds light on how DNS servers around the world extend TTLs and the potential performance and security risks involved.
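A minimal sketch of the kind of TTL-decrement check this methodology implies, written with the dnspython library; the resolver address, query name, and threshold are placeholders, and a real measurement would route queries through Brightdata residential exit nodes rather than querying the resolver directly as done here:

```python
import time
import dns.resolver  # dnspython >= 2.0 (pip install dnspython)

def ttl_decrements(resolver_ip, qname="example.com", wait=10):
    """Rough check of whether a resolver decrements cached TTLs.

    A hedged sketch, not the paper's methodology: it primes the cache,
    waits, and re-queries, expecting the cached answer's TTL to have
    dropped by roughly the elapsed time.
    """
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [resolver_ip]
    ttl1 = r.resolve(qname, "A").rrset.ttl  # first query primes the cache
    time.sleep(wait)
    ttl2 = r.resolve(qname, "A").rrset.ttl  # should now be ~wait seconds lower
    # If the TTL did not drop by roughly `wait` seconds, the resolver may be
    # resetting or extending TTLs instead of honoring the original value.
    # (The second query can also miss the same cache shard; repeated trials
    # across exit nodes are what make the study's measurement robust.)
    return (ttl1 - ttl2) >= wait - 1

if __name__ == "__main__":
    print(ttl_decrements("8.8.8.8"))
```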
202

Lunar: A User-Level Stack Library for Network Emulation

Knestrick, Christopher C. 02 March 2004 (has links)
The primary issue with developing new networking protocols is testing how the protocol will behave when deployed on a large scale; of particular interest is how it will interact with existing protocols. Testing a protocol using a network simulator has drawbacks. First, the protocol must be written for the simulator and then rewritten for actual deployment. Aside from the additional work, this allows software bugs to be introduced between testing and implementation. More importantly, there are correctness issues: since both the new and existing protocols must be specially written for the simulator, rather than being actual real-world implementations, the question remains whether the observed behavior and, specifically, the interactions between the protocols are valid. Direct code execution environments solve the correctness problem, but they lose the control that a simulator provides. Our solution is to create an environment that allows direct code execution to occur on top of a network simulator. This thesis presents the primary component of that solution: Lunar (Linux User-level Network Architecture), a user-level library created from the network stack portion of the Linux operating system. This allows real-world applications to link against a simulator, with Lunar serving as the bridge. For this work, an implementation of Lunar was constructed using version 2.4.3 of the Linux kernel. Verification testing was performed to demonstrate correct functioning of the library using both TCP (including TCP with loss) and UDP. Performance testing was done to measure the overhead that Lunar adds to a running application. Overhead was measured as the percent increase in the runtime of an application with Lunar compared to the application running without it, and ranged from approximately 2% (over 100 Mbps switched Ethernet) to approximately 39% (1 Gbps Myrinet). / Master of Science
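A worked example of the overhead metric as the abstract defines it (percent increase in runtime with Lunar versus without); a sketch only, and the command names are placeholders rather than artifacts from the thesis:

```python
import subprocess
import time

def runtime(cmd):
    """Wall-clock runtime of a command, in seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

# Hypothetical binaries: the same application built without and with Lunar.
t_native = runtime(["./app_native"])
t_lunar = runtime(["./app_lunar"])

# Overhead = percent increase in runtime relative to the native run.
overhead_pct = 100.0 * (t_lunar - t_native) / t_native
print(f"Lunar overhead: {overhead_pct:.1f}%")
```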
203

The emerging protocol: An integrated reliable and effective regime

Pearson, Graham S., Dando, Malcolm January 1999 (has links)
Yes
204

The BTWC Protocol: Proposed Complete Text for an Integrated Regime

Pearson, Graham S., Sims, N.A., Dando, Malcolm, Kenyon, I.R. January 2000 (has links)
Yes
205

The BTWC Protocol: Revised Proposed Complete Text for an Integrated Regime

Pearson, Graham S., Sims, N.A., Dando, Malcolm, Kenyon, I.R. January 2000 (has links)
Yes
206

The BTWC Protocol: Proposed Complete Text for an Integrated Regime

Pearson, Graham S., Sims, N.A., Dando, Malcolm, Kenyon, I.R. January 2000 (has links)
Yes
207

The Composite Protocol Text: An Effective Strengthening of the Biological and Toxin Weapons Convention

Pearson, Graham S., Dando, Malcolm, Sims, N.A. January 2001 (has links)
Yes
208

USING LABVIEW TO DESIGN A FAULT-TOLERANT LINK ESTABLISHMENT PROTOCOL

Horan, Stephen, Deivasigamani, Giriprassad 10 1900 (has links)
International Telemetering Conference Proceedings / October 18-21, 2004 / Town & Country Resort, San Diego, California / This paper presents the design of a protocol for satellite-cluster link establishment and management that accounts for link corruption, node failures, and node re-establishment. The protocol will need to manage traffic flow between nodes in the satellite cluster, adjust routing tables due to node motion, allow for sub-networks in the cluster, and perform similar activities. This protocol development is in its initial stages, and we describe how we use the LabVIEW State Diagram toolkit to generate the code for a state machine representing the protocol for establishing inter-satellite communication links.
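An illustration of a fault-tolerant link-establishment state machine of the sort the abstract describes, written in Python rather than LabVIEW; the states, events, and transition table are hypothetical stand-ins, not the authors' actual design:

```python
from enum import Enum, auto

class LinkState(Enum):
    IDLE = auto()
    REQUESTING = auto()
    ESTABLISHED = auto()
    FAILED = auto()

# Hypothetical transition table: (state, event) -> next state.
TRANSITIONS = {
    (LinkState.IDLE, "request"): LinkState.REQUESTING,
    (LinkState.REQUESTING, "ack"): LinkState.ESTABLISHED,
    (LinkState.REQUESTING, "timeout"): LinkState.FAILED,
    (LinkState.ESTABLISHED, "corrupt"): LinkState.REQUESTING,  # retry on link corruption
    (LinkState.ESTABLISHED, "node_down"): LinkState.FAILED,    # peer node failure
    (LinkState.FAILED, "node_up"): LinkState.IDLE,             # node re-establishment path
}

def step(state, event):
    """Advance the state machine; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

# Example run: establish the link, suffer corruption, then re-establish.
state = LinkState.IDLE
for event in ["request", "ack", "corrupt", "ack"]:
    state = step(state, event)
    print(f"{event:9s} -> {state.name}")
```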
209

Vertical handoff in heterogeneous wireless networks with mSCTP

Tsang, Cheuk-kan, Ken., 曾卓勤. January 2008 (has links)
published_or_final_version / Computer Science / Doctoral / Doctor of Philosophy
210

Measuring Effectiveness of Address Schemes for AS-level Graphs

Zhuang, Yinfang 01 January 2012 (has links)
This dissertation presents measures of efficiency and locality for Internet addressing schemes. Historically speaking, many issues faced by the Internet have been solved just in time, to make the Internet "just work" [justWork]. Consensus, however, has been reached that today's Internet routing and addressing system faces serious scaling problems: multi-homing, which leads to finer-granularity routing policies and finer control to realize various traffic-engineering requirements; increased demand for provider-independent prefix allocations, which injects unaggregatable prefixes into the Default Free Zone (DFZ) routing table; and an ever-increasing population of Internet users and mobile edge devices. As a result, the DFZ routing table is again growing at an exponential rate. Hierarchical, topology-based addressing has long been considered crucial to routing and forwarding scalability. Recently, however, a number of research efforts have considered alternatives to this traditional approach. With the goal of informing such research, we investigated the efficiency of address assignment in the existing (IPv4) Internet. In particular, we ask the question: "how can we measure the locality of an address scheme given an input AS-level graph?" To do so, we first define a notion of efficiency or locality based on the average number of bit-hops required to advertise all prefixes in the Internet. In order to quantify how far from "optimal" the current Internet is, we assign prefixes to ASes "from scratch" in a manner that preserves observed semantics, using three increasingly strict definitions of equivalence. Next, we propose another metric that in some sense quantifies the "efficiency" of the labeling and is independent of forwarding/routing mechanisms. We validate the effectiveness of the metric by applying it to a series of address schemes with increasing randomness, given an input AS-level graph. After that, we apply the metric to the current Internet address scheme across years and compare the results with those of compact routing schemes.
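One plausible reading of the bit-hops notion, sketched below under the assumption that advertising a prefix means carrying its bits over shortest paths from the origin AS to every other AS; this interpretation and the toy graph are illustrative assumptions, not the dissertation's exact definition:

```python
from collections import deque

def bfs_hops(adj, origin):
    """Hop counts from the origin AS to every other AS via BFS."""
    dist = {origin: 0}
    queue = deque([origin])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def avg_bit_hops(adj, prefixes):
    """Average bit-hops over all prefixes: for each (origin AS, prefix
    bit-length) pair, sum bits carried times hops traveled to reach
    every AS the advertisement must visit."""
    total = 0
    for origin, bits in prefixes:
        dist = bfs_hops(adj, origin)
        total += sum(bits * d for d in dist.values())
    return total / len(prefixes)

# Toy AS-level graph (adjacency lists) and prefix assignment, illustrative only.
adj = {1: [2], 2: [1, 3, 4], 3: [2], 4: [2]}
prefixes = [(1, 24), (3, 16)]  # (origin AS, prefix length in bits)
print(avg_bit_hops(adj, prefixes))
```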
