  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
191

The BTWC Protocol Implementation: Practical Considerations

Pearson, Graham S. 10 1900 (has links)
Yes
192

The emerging protocol: A quantified evaluation of the regime

Pearson, Graham S. January 1999 (has links)
Yes
193

Preamble

Pearson, Graham S., Sims, N.A. January 2000 (has links)
Yes
194

The Effects of Sepsis Management Protocols on Time to Antibiotic Administration in the Emergency Department

Lorch, Margaret K 01 January 2018 (has links)
Sepsis is one of the leading causes of death in U.S. hospitals, resulting from organ dysfunction caused by an inappropriate inflammatory response to infection. Timely treatment with empiric antibiotics in the emergency department is crucial to positive patient outcomes. The Surviving Sepsis Campaign (SSC) recommends initiating empiric antibiotic therapy within one hour of presentation to the emergency department. Some emergency departments have implemented sepsis management protocols to guide care and ensure timely treatment. The purpose of this study is to determine the effect of a formal sepsis protocol in the emergency department on time to antibiotic administration. A literature review was conducted using CINAHL, the Cochrane Database, Health Source: Nursing/Academic Edition, and MEDLINE. Results from one systematic review, eight quasi-experimental studies, and four quality improvement projects suggested that implementation of a sepsis management protocol in an emergency department may decrease time to antibiotic administration. Eleven of the 13 articles reported decreases in time to antibiotic administration of 8 to 193 minutes compared with pre-protocol baselines. One study met the SSC goal of one hour, reporting a median administration time of 17 minutes. Time to antibiotics was influenced by protocols based on published sepsis guidelines, inclusion of antibiotic guidelines, nurse-initiated treatment, and education for emergency clinicians regarding sepsis management. Emergency departments should implement sepsis protocols adapted to their local institution to decrease time to antibiotic administration and reduce mortality among sepsis patients. Further research on how sepsis protocols affect antibiotic administration time is needed.
195

Global Synchronization of Asynchronous Computing Systems

Barnes, Richard Neil 14 December 2001 (has links)
The MSU ERC UltraScope system consists of a distributed computing system, custom PCI cards, GPS receivers, and a re-radiation system. The UltraScope system allows precision timestamping of events in a distributed application on a system where the CPU and PCI clocks are phase-locked. The goal of this research is to expand the UltraScope system, using software routines and minimal hardware modifications, to allow precision timestamping of events on an asynchronous distributed system. The timestamp process is similar to the Network Time Protocol (NTP) in that it uses a series of timestamps to improve precision. As expected, timestamps are less precise on an asynchronous system than on a synchronous one. Results show that precision is improved by using this sequence of timestamps and that the major error component is due to operating-system delays. The errors associated with the timestamping process are characterized using a synchronous system as a baseline.
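The NTP-style exchange the abstract alludes to can be sketched as follows. This is an illustrative Python sketch of the standard NTP offset/delay arithmetic (per RFC 5905), not the UltraScope code; the timestamp values and function names are invented for illustration.

```python
# NTP-style clock offset estimation from a series of timestamp exchanges.
# Per RFC 5905: t0 = client send, t1 = server receive,
#               t2 = server send, t3 = client receive.

def offset_and_delay(t0, t1, t2, t3):
    """Return (clock offset, round-trip delay) for one exchange."""
    offset = ((t1 - t0) + (t2 - t3)) / 2.0
    delay = (t3 - t0) - (t2 - t1)
    return offset, delay

def best_estimate(samples):
    """Keep the exchange with the smallest round-trip delay; its offset
    is least contaminated by asymmetric queuing and OS delays."""
    results = [offset_and_delay(*s) for s in samples]
    return min(results, key=lambda r: r[1])

samples = [
    (0.000, 0.105, 0.106, 0.220),   # slow exchange, discarded
    (1.000, 1.101, 1.102, 1.204),   # tighter exchange, preferred
]
offset, delay = best_estimate(samples)
```

Collecting a series of such exchanges and filtering by delay is what lets a sequence of timestamps improve precision, as the abstract describes.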
196

Secure Multi-party Authorization in Clouds

Lin, Wenjie 22 May 2015 (has links)
No description available.
197

Application Layer Multipoint Extension for the Session Initiation Protocol

Thorp, Brian J. 04 May 2005 (has links)
The Session Initiation Protocol (SIP) was first published in 1999 by the Internet Engineering Task Force (IETF) as a standard for multimedia session signaling. SIP is a peer-to-peer signaling protocol capable of initiating, modifying, and terminating media sessions. SIP builds on existing Internet protocols such as the Domain Name System (DNS) and the Session Description Protocol (SDP), allowing it to integrate seamlessly into existing IP networks. As SIP has matured and gained acceptance, its deficiencies as a multipoint communications protocol have become apparent. SIP currently supports two modes of operation, referred to as conferencing and multicasting. Conferencing is the unicast transmission of session information between conference members; multicasting uses IP multicast to distribute session information. This thesis proposes an extension to the Session Initiation Protocol that improves its functionality for multipoint communications. When using conferencing, a SIP user agent has limited information about the conference it is taking part in. The extension increases the awareness of a SIP node by providing it with complete conference membership information, the ability to detect neighboring node failures, and the ability to automatically repair conference partitions. Signaling for conferencing was defined and integrated into a standard SIP implementation, where it was used to demonstrate the above capabilities. Using a prototype implementation, the additional functionality was shown to come at the cost of a modest increase in transaction message size and processing complexity. IP multicast has limited deployment in today's networks, which reduces the usability of this otherwise useful mode. Since IP multicast support is not guaranteed, the use of application-layer multicast protocols is proposed to replace IP multicast.
An efficient means of negotiating an application-layer protocol is proposed, as well as a way to provide the protocol with the session information it needs to begin operation. A ring protocol was defined and implemented using the proposed extension. Performance testing revealed that the application-layer protocol had slightly higher processing complexity than conferencing but, on average, a smaller transaction message size. / Master of Science
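The partition-repair idea described above — every node knowing the full membership so it can route around a failed neighbor — can be sketched as follows. This is a hypothetical illustration of the general technique; the `Ring` class and its method names are ours, not taken from the thesis or any SIP extension draft.

```python
# Each conference node holds the complete membership list (as the proposed
# SIP extension provides), so when it detects a neighbor failure it can
# splice the ring locally instead of tearing down the session.

class Ring:
    def __init__(self, members):
        self.members = list(members)   # full membership, known to every node
        self.failed = set()

    def mark_failed(self, node):
        """Record a neighbor failure detected by this node."""
        self.failed.add(node)

    def successor(self, node):
        """Next live member clockwise from `node`, skipping failed nodes;
        None if every other member has failed."""
        i = self.members.index(node)
        for step in range(1, len(self.members)):
            cand = self.members[(i + step) % len(self.members)]
            if cand not in self.failed:
                return cand
        return None

ring = Ring(["a", "b", "c", "d"])
ring.mark_failed("c")
next_hop = ring.successor("b")   # "b" now routes around "c" to "d"
```

Because repair is a local computation over shared membership state, no extra signaling round is needed to re-form the ring after a single failure.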
198

Measuring and Understanding TTL Violations in DNS Resolvers

Bhowmick, Protick 02 January 2024 (has links)
The Domain Name System (DNS) is a scalable, distributed caching architecture in which DNS records are cached at many DNS servers distributed globally. Each DNS record includes a time-to-live (TTL) value that dictates how long the record may be stored before it is evicted from the cache. TTL is significant both for DNS security, such as determining the caching period for DNSSEC-signed responses, and for performance, such as the responsiveness of CDN-managed domains. At a high level, TTL is crucial for efficient caching, load distribution, and network security in the Domain Name System, and setting appropriate TTL values is a key aspect of DNS administration. It is therefore important to measure how TTL violations occur in resolvers. Assessing how DNS resolvers worldwide handle TTL is not easy, however, and typically requires access to multiple nodes distributed globally. In this work, we introduce a novel methodology for measuring TTL violations in DNS resolvers that leverages a residential proxy service called Brightdata, enabling us to evaluate more than 27,000 resolvers across 9,500 Autonomous Systems (ASes). Among the 8,524 resolvers that had at least five distinct exit nodes, we found that 8.74% arbitrarily extend TTLs. We also find that 44.1% of DNSSEC-validating resolvers disregard the DNSSEC standard, continuing to serve DNSSEC-signed responses even after the RRSIGs have expired. / Master of Science / The Domain Name System (DNS) works as a global phonebook for the internet, helping your computer find websites by translating human-readable names into numerical IP addresses. This system uses a caching scheme spread across servers worldwide to store DNS records. Each record comes with a time-to-live (TTL) value, essentially a timer that decides how long the information should stay in the cache before being replaced.
TTL is crucial for both security and performance in the DNS world. It plays a role in securing responses and determines the responsiveness of load-balancing schemes employed at Content Delivery Networks (CDNs). In simple terms, TTL ensures efficient caching, even network load, and overall security in the Domain Name System. For DNS to work smoothly, it is important to set the right TTL values and for resolvers to honor them strictly. However, figuring out how well DNS servers follow these rules globally is challenging. In this study, we introduce a new way to measure TTL violations in DNS servers using a proxy service called Brightdata, which allows us to check over 27,000 servers across 9,500 networks. Our findings reveal that 8.74% of these servers extend TTLs arbitrarily. Additionally, we discovered that 44.1% of servers that should be following a security standard (DNSSEC) are not doing so properly, providing signed responses even after they are supposed to have expired. This research sheds light on how DNS servers around the world extend TTLs and the potential performance and security risks involved.
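The core detection idea — comparing the TTLs a resolver returns against the TTL the authoritative server set — can be sketched as below. This is a hedged illustration of the general measurement logic, not the thesis's actual pipeline; the function names are ours, and real measurements additionally need globally distributed vantage points as the abstract explains.

```python
# A well-behaved cache returns a residual TTL that only counts down from
# the authoritative value. A returned TTL larger than the authoritative
# one is an unambiguous signature of arbitrary TTL extension.

def extends_ttl(authoritative_ttl, observed_ttls):
    """True if the resolver ever advertised a TTL beyond the zone's."""
    return any(ttl > authoritative_ttl for ttl in observed_ttls)

def classify(authoritative_ttl, observed_ttls):
    """Label one resolver's behavior for a single cached record."""
    if extends_ttl(authoritative_ttl, observed_ttls):
        return "extends"
    return "honors"

# Normal cache: residual TTL counts down toward zero.
ok = classify(300, [300, 240, 120, 5])
# Violating resolver: re-inflates the record to an hour.
bad = classify(300, [300, 3600])
```

Repeating such probes through many residential exit nodes is what lets the per-resolver classification scale to thousands of networks.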
199

Lunar: A User-Level Stack Library for Network Emulation

Knestrick, Christopher C. 02 March 2004 (has links)
The primary issue with developing new networking protocols is testing how a protocol will behave when deployed on a large scale; of particular interest is how it will interact with existing protocols. Testing a protocol using a network simulator has drawbacks. First, the protocol must be written for the simulator and then rewritten for actual deployment; aside from the additional work, this allows software bugs to be introduced between testing and implementation. More importantly, there are correctness issues: since both the new and existing protocols must be specially written for the simulator, rather than being real-world implementations, it is unclear whether the observed behavior and, specifically, the interactions between the protocols are valid. Direct code execution environments solve the correctness problem, but they lose the control that a simulator provides. Our solution is to create an environment that allows direct code execution on top of a network simulator. This thesis presents the primary component of that solution: Lunar (Linux User-level Network Architecture), a user-level library created from the network stack portion of the Linux operating system. This allows real-world applications to link against a simulator, with Lunar serving as the bridge. For this work, an implementation of Lunar was constructed from version 2.4.3 of the Linux kernel. Verification testing demonstrated correct functioning of the library using both TCP (including TCP with loss) and UDP. Performance testing measured the overhead that Lunar adds to a running application, defined as the percent increase in the runtime of an application with Lunar compared to the application running without it; overhead ranged from approximately 2% (over 100 Mbps switched Ethernet) to approximately 39% (1 Gbps Myrinet).  / Master of Science
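The overhead metric defined above is simply the percent increase in runtime. A worked check, with timing values invented for illustration:

```python
# Overhead as defined in the abstract: percent increase in runtime of an
# application running with Lunar relative to the same application without it.

def overhead_percent(t_without, t_with):
    return (t_with - t_without) / t_without * 100.0

fast_path = overhead_percent(10.0, 10.2)   # ~2%, like 100 Mbps Ethernet
slow_path = overhead_percent(10.0, 13.9)   # ~39%, like 1 Gbps Myrinet
```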
200

Remote Integrity Checking using Multiple PUF based Component Identifiers

Mandadi, Harsha 14 June 2017 (has links)
Modern Printed Circuit Boards (PCBs) contain sophisticated and valuable electronic components, which makes them a prime target for counterfeiting. In this thesis, we consider a method to test whether a PCB is genuine. One high-level solution is to use a secret identifier of the board together with a cryptographic authentication protocol. We describe a mechanism that authenticates all major components of the PCB as part of attesting the board. Our authentication protocol constructs the fingerprint of a PCB by extracting hardware fingerprints from the components on the PCB and cryptographically combining them. Fingerprints for each component are derived using Physical Unclonable Functions (PUFs). In this thesis, we present a PUF-based authentication protocol for remote integrity checking using multiple PUF component-level identifiers. We address the design at three abstraction levels: (1) the hardware level, (2) the hardware integration level, and (3) the protocol level. At the hardware level, we propose an approach to derive a PUF from a flash memory component on the device. At the hardware integration level, we discuss a hardware solution for implementing trustworthy PUF-based authentication. We present a prototype of the PUF-based authentication protocol on an FPGA board using network sockets. / Master of Science / Electronic devices have become ubiquitous, from day-to-day applications to critical applications in defense and medicine. These devices have valuable electronic components integrated into them and, because of their growing importance, have attracted many counterfeiters, who replace genuine components with substandard ones. In this thesis, we discuss a method to determine whether an electronic device, in this case a Printed Circuit Board, is genuine. We present a solution to remotely verify the authenticity of a board by extracting fingerprints from all of its major components.
Fingerprints from each major component on the board are extracted using Physical Unclonable Functions (PUFs). These fingerprints are cryptographically combined to develop a unique fingerprint for the board. Our design is addressed at three abstraction levels: (1) the hardware level, (2) the hardware integration level, and (3) the protocol level. At the hardware level, we discuss an approach to extract fingerprints from a flash memory component. At the hardware integration level, we discuss a hardware approach for a trustworthy PUF-based solution. At the protocol level, we present a prototype of our design on an FPGA using network sockets.
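The "cryptographically combine, then authenticate" step can be sketched as below. This is a minimal illustration of the general pattern, assuming SHA-256 for combination and HMAC for challenge-response; the function names and the example PUF responses are invented and are not the thesis's actual protocol.

```python
# Combine per-component PUF responses into one board fingerprint, then
# answer a verifier's nonce with an HMAC so the raw fingerprint never
# travels over the network.
import hashlib
import hmac

def board_fingerprint(component_fps):
    """Hash the component fingerprints in a fixed (sorted) order so the
    same set of components always yields the same board digest."""
    h = hashlib.sha256()
    for fp in sorted(component_fps):
        h.update(fp)
    return h.digest()

def attest(fingerprint, nonce):
    """Challenge-response: prove possession of the board fingerprint
    without revealing it, by keying an HMAC over the fresh nonce."""
    return hmac.new(fingerprint, nonce, hashlib.sha256).hexdigest()

fps = [b"flash-puf-resp", b"sram-puf-resp", b"cpu-puf-resp"]
fp = board_fingerprint(fps)
tag = attest(fp, b"server-nonce-001")
# The verifier, holding the enrolled fingerprint, recomputes and compares.
```

Swapping any one component changes its PUF response, which changes the combined digest and makes every subsequent attestation fail.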
