1. Protocol design for dynamic Delaunay triangulation. Lee, Dong-young. 28 September 2012.
Delaunay triangulation (DT) is a useful geometric structure for networking applications. We define a distributed DT and present a necessary and sufficient condition for a distributed DT to be correct. This condition is used as a guide for protocol design. We investigate the design of join, leave, failure, and maintenance protocols for a set of nodes in d dimensions (d > 1) to construct and maintain a distributed DT in a dynamic environment. The join, leave, and failure protocols in the suite are proved to be correct for a single join, leave, and failure, respectively. For a system under churn, it is impossible to maintain a correct distributed DT continually, so we define an accuracy metric such that accuracy is 100% if and only if the distributed DT is correct. The suite also includes a maintenance protocol designed to recover from incorrect system states and to improve accuracy. In designing the protocols, we make use of two novel observations to substantially improve protocol efficiency. First, in a node's neighbor discovery process, many replies to the node's queries contain redundant information. Second, a new failure protocol that takes a proactive approach to recovery performs better than the reactive approaches used in prior work. Experimental results show that our suite of protocols maintains high accuracy for systems under churn, that each system converges to 100% accuracy after churning stops, and that the protocols are much more efficient than those in prior work. To illustrate the usefulness of distributed DT for networking applications, we also present several application protocols, including greedy routing, finding a closest existing node, clustering, broadcast, and geocast. Bose and Morin proved in 2004 that greedy routing on a DT always reaches the destination node. We prove that greedy routing always finds a closest existing node to a given point, and that our broadcast and geocast protocols always deliver a message to every target node. Our broadcast and geocast protocols are also efficient in the sense that very few target nodes receive duplicate messages and non-target nodes receive no messages. Performance characteristics of greedy routing, broadcast, and geocast are investigated using simulation experiments, and we also investigate the impact of inaccurate coordinates on their performance.
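To make the greedy routing idea concrete, here is a minimal sketch, not the dissertation's protocol: it assumes a hypothetical `neighbors_of` map from each node (a coordinate tuple) to its current set of Delaunay neighbors, forwards at every hop to the neighbor closest to the target point, and stops when no neighbor is closer than the current node. Per the result stated above, on a correct distributed DT the node where forwarding stops is a closest existing node to the target.

```python
import math

def distance(p, q):
    """Euclidean distance between two points given as coordinate tuples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def greedy_route(start, target_point, neighbors_of):
    """Greedy forwarding sketch: at each hop, move to the Delaunay neighbor
    closest to target_point; stop when no neighbor improves the distance.
    `neighbors_of` is an assumed map from a node to its DT neighbors."""
    path = [start]
    current = start
    while True:
        candidates = neighbors_of.get(current, [])
        if not candidates:
            return path
        best = min(candidates, key=lambda n: distance(n, target_point))
        if distance(best, target_point) >= distance(current, target_point):
            return path  # local minimum: no neighbor is closer to the target
        path.append(best)
        current = best
```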
2. Hop integrity: a defense against denial-of-service attacks. Huang, Chin-Tser. 28 August 2008.
Abstract not available.
3. Scalable real-time architectures and hardware support for high-speed QoS packet schedulers. Krishnamurthy, Rajaram B. January 2003.
No description available.
4. Protocol design for scalable and reliable group rekeying. Zhang, Xincheng. 28 August 2008.
Abstract not available.
5. Design and analysis of self-stabilizing sensor network protocols. Choi, Young-ri. 28 August 2008.
A sensor is a small, battery-operated computer with an antenna and a sensing board that can detect magnetism, sound, heat, and other phenomena. Sensors in a network communicate and cooperate with other sensors to perform given tasks. A sensor network is exposed to various dynamic factors and faults, such as topology changes, energy-saving features, unreliable communication, and hardware/software failures. Thus, protocols in a sensor network should be able to adapt to these dynamic factors and recover from faults. In this dissertation, we focus on designing and analyzing a class of sensor network protocols called self-stabilizing protocols. A self-stabilizing protocol is guaranteed to return to a state where it performs its intended function correctly after some dynamic factor or fault corrupts the protocol's state arbitrarily. Therefore, to make a sensor network resilient to dynamic factors and faults, each protocol in the network should be self-stabilizing. We first develop a state-based model that can be used to formally specify sensor network protocols. This model accommodates several unique characteristics of sensor networks, such as unavoidable local broadcast, probabilistic message transmission, asymmetric communication, message collision, timeout actions, and randomization steps. Second, we present methods for verifying and analyzing the correctness and self-stabilization properties of sensor network protocols specified in this model. Third, using the state-based model and analysis methods, we design three self-stabilizing sensor network protocols, prove their self-stabilization properties, and estimate their performance. These three protocols are a sentry-sleeper protocol that elects a sentry from a group of sensors at the beginning of each time period, a logical grid routing protocol that builds a routing tree rooted at the base station, and a family of flood sequencing protocols that use sequence numbers to distinguish fresh flood messages from redundant ones.
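The sequence-number idea behind the flood sequencing protocols can be sketched as follows; this is an illustrative simplification with assumed names, not the dissertation's specification. Each sensor remembers the last sequence number it accepted from each flood source and forwards a received message only when it carries a newer number.

```python
class FloodSequencer:
    """Sketch of per-source flood sequencing: fresh messages are accepted
    and rebroadcast, redundant copies are dropped (assumed structure)."""

    def __init__(self):
        self.last_seq = {}  # flood source id -> last accepted sequence number

    def handle(self, source_id, seq, payload, rebroadcast):
        """Process one received flood message; `rebroadcast` is an assumed
        callback that locally broadcasts the message to neighboring sensors."""
        if seq > self.last_seq.get(source_id, -1):
            self.last_seq[source_id] = seq
            rebroadcast(source_id, seq, payload)
            return True   # fresh message: accept and forward
        return False      # redundant copy: ignore
```

A self-stabilizing version would additionally need to recover from arbitrarily corrupted sequence state, which is part of what the dissertation's protocols address.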
6. Design and evaluation of MAC protocols for hybrid fiber/coaxial systems. Sala, Dolors. 05 1900.
No description available.
7. An Automata-Theoretic Approach to Hardware/Software Co-verification. Li, Juncao. 01 January 2010.
Hardware/software (HW/SW) interfaces are pervasive in computer systems, yet many HW/SW interface implementations are unreliable because of their intrinsically complicated nature. In industrial settings, there are three major challenges to improving reliability. First, because there is no systematic framework for HW/SW interface specifications, interface protocols cannot be precisely conveyed to engineers. Second, because there is no unifying formal model that accurately represents the implementation semantics of HW/SW interfaces, some critical properties cannot be formally verified on HW/SW interface implementations. Finally, few automatic tools exist to help engineers in HW/SW interface development. In this dissertation, we present an automata-theoretic approach to HW/SW co-verification that addresses these challenges. We designed a co-specification framework to formally specify HW/SW interface protocols; we synthesized a hybrid of a Büchi automaton and a pushdown system, the Büchi Pushdown System (BPDS), as the unifying formal model for HW/SW interfaces; and we created a co-verification tool, CoVer, that implements our model checking algorithms and realizes our reduction algorithms for BPDS. Applying our approach to the Windows device/driver framework resulted in the detection of fifteen specification issues. Furthermore, using CoVer, we discovered twelve real bugs in five drivers. These non-trivial findings demonstrate the significance of our approach in industrial applications.
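The BPDS construction itself is beyond a short excerpt, but the following toy sketch (assumed names and structures, not CoVer's actual model) illustrates the underlying idea of synchronizing a hardware Büchi automaton with a software pushdown system on a shared interface event; acceptance in the real model is defined over infinite runs and checked by the model checking and reduction algorithms mentioned above.

```python
from collections import namedtuple

# Toy hardware model: a Büchi automaton over HW/SW interface events.
# trans: dict mapping (state, event) -> set of successor states
BuchiAutomaton = namedtuple("BuchiAutomaton", "states init trans accepting")

# Toy software model: a pushdown system whose rules fire on interface events.
# rules: dict mapping (control, stack_top, event) -> set of (control', pushed_symbols)
PushdownSystem = namedtuple("PushdownSystem", "controls init_control init_stack rules")

def product_step(ba, pds, hw_state, sw_control, stack, event):
    """One synchronized step of a BPDS-like product: on a shared interface
    event, the hardware automaton and the software pushdown system move
    together, and the software side rewrites the top of its stack."""
    if not stack:
        return []
    top, rest = stack[0], stack[1:]
    successors = []
    for hw_next in ba.trans.get((hw_state, event), set()):
        for sw_next, pushed in pds.rules.get((sw_control, top, event), set()):
            successors.append((hw_next, sw_next, list(pushed) + rest))
    return successors
```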