Efficient Range and Join Query Processing in Massively Distributed Peer-to-Peer Networks. Wang, Qiang, January 2008.
Peer-to-peer (P2P) has emerged as a distributed computing architecture that supports very large-scale data management and query processing. Complex query operators such as the range operator and the join operator are needed by a variety of distributed applications, including content distribution, locality-aware services, and computing resource sharing.
This dissertation tackles several problems related to range and join query processing in P2P systems: fault-tolerant range query processing under a structured P2P architecture, distributed range caching under an unstructured P2P architecture, and integration of heterogeneous data under an unstructured P2P architecture. To support fault-tolerant range query processing and provide strong performance guarantees in the presence of network churn, effective replication schemes are developed at either the overlay-network level or the query-processing level. To facilitate range query processing, a prefetch-based caching approach is proposed to eliminate the performance bottlenecks caused by data items that are not well cached in the network. Finally, a purely decentralized partition-based join query operator is devised to realize bandwidth-efficient join query processing under an unstructured P2P architecture.
Theoretical analysis and experimental simulations demonstrate the effectiveness of the proposed approaches.
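To make the partition-based join idea concrete, the sketch below shows the generic hash-partitioned join that such operators are typically built on: tuples of both relations are assigned to peers by hashing the join key, so matching tuples meet at the same peer and only one partition's worth of data crosses the network per peer. This is an illustrative toy with an assumed, fixed peer count and made-up names, not the dissertation's actual operator.

```python
from collections import defaultdict

NUM_PEERS = 4  # illustrative network size (assumption)

def partition(relation, key_index):
    """Assign each tuple to a peer by hashing its join-key value."""
    parts = defaultdict(list)
    for tup in relation:
        parts[hash(tup[key_index]) % NUM_PEERS].append(tup)
    return parts

def local_join(r_tuples, s_tuples, r_key, s_key):
    """Ordinary in-memory hash join, executed locally at a single peer."""
    index = defaultdict(list)
    for r in r_tuples:
        index[r[r_key]].append(r)
    return [r + s for s in s_tuples for r in index[s[s_key]]]

def partition_join(R, S, r_key=0, s_key=0):
    """Join R and S; each peer only ever sees and joins its own partition."""
    r_parts, s_parts = partition(R, r_key), partition(S, s_key)
    result = []
    for peer in range(NUM_PEERS):
        result.extend(local_join(r_parts[peer], s_parts[peer], r_key, s_key))
    return result

if __name__ == "__main__":
    R = [(1, "a"), (2, "b"), (3, "c")]
    S = [(1, "x"), (3, "y"), (4, "z")]
    print(partition_join(R, S))  # joins only the tuples sharing keys 1 and 3
```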
Resource-Efficient Communication in the Presence of Adversaries. Young, Maxwell, January 2011.
This dissertation presents algorithms for achieving communication in the presence of adversarial attacks in large, decentralized, resource-constrained networks. We consider abstract single-hop communication settings where a set of senders 𝙎 wishes to directly communicate with a set of receivers 𝙍. These results are then extended to provide resource-efficient, multi-hop communication in wireless sensor networks (WSNs), where energy is critically scarce, and peer-to-peer (P2P) networks, where bandwidth and computational power are limited. Our algorithms are provably correct in the face of attacks by a computationally bounded adversary who seeks to disrupt communication between correct participants.
The first major result in this dissertation addresses a general scenario involving single-hop communication in a time-slotted network where a single sender in 𝙎 wishes to transmit a message 𝘮 to a single receiver in 𝙍. The two players share a communication channel; however, there exists an adversary who aims to prevent the transmission of 𝘮 by periodically blocking this channel. There are costs to send, receive or block 𝘮 on the channel, and we ask: How much do the two players need to spend relative to the adversary in order to guarantee transmission of the message?
This problem abstracts many types of conflict in information networks, and the associated costs represent an expenditure of network resources. We show that it is significantly more costly for the adversary to block 𝘮 than for the two players to achieve communication. Specifically, if the costs to send, receive, and block 𝘮 in a slot are fixed constants, and the adversary spends a total of 𝘉 slots to try to block the message, then both the sender and receiver must be active in only O(𝘉ᵠ⁻¹ + 1) slots in expectation to transmit 𝘮, where φ = (1 + √5)/2 is the golden ratio. Surprisingly, this result holds even if (1) the value of 𝘉 is unknown to either player; (2) the adversary knows the algorithms of both players, but not their random bits; and (3) the adversary is able to launch attacks using total knowledge of past actions of both players. Finally, these results are applied to two concrete problems. First, we consider jamming attacks in WSNs and address the fundamental task of propagating 𝘮 from a single device to all others in a WSN in the presence of faults; this is the problem of reliable broadcast. Second, we examine how our algorithms can mitigate application-level distributed denial-of-service attacks in wired client-server scenarios.
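To get a feel for how lopsided this bound is, here is an illustrative calculation that simply plugs numbers into the stated asymptotic bound, ignoring the constants hidden by the O(·) notation:

```latex
% Golden ratio and the resulting exponent:
\varphi = \frac{1+\sqrt{5}}{2} \approx 1.618,
\qquad \varphi - 1 = \frac{1}{\varphi} \approx 0.618 .

% If the adversary blocks B = 10^6 slots, each player needs to be active in roughly
B^{\varphi-1} = \left(10^{6}\right)^{0.618} \approx 5 \times 10^{3}
\ \text{slots in expectation,}

% i.e. about 200 times fewer slots than the adversary spends, and the gap
% widens as B grows.
```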
The second major result deals with a single-hop communication problem where 𝙎 now consists of multiple senders and there is still a single receiver who wishes to obtain a message 𝘮. However, many of the senders (strictly fewer than half) can be faulty, failing to send 𝘮 or sending incorrect messages. Although the majority of the senders possess 𝘮, rather than listening to all of 𝙎 and majority-filtering the received data, we desire an algorithm that allows the single receiver to decide on 𝘮 in a more efficient manner. To investigate this scenario, we define and devise algorithms for a new data streaming problem called the Bad Santa problem, which models the selection dilemma faced by the receiver.
With our results for the Bad Santa problem, we consider the problem of energy-efficient reliable broadcast. All previous results on reliable broadcast require devices to spend significant time in the energy-expensive receiving state, which is a critical problem in WSNs where devices are typically battery-powered. In a popular WSN model, we give a reliable broadcast protocol that achieves optimal fault tolerance (i.e., tolerates the maximum number of faults in this WSN model) and improves over previous results by achieving an expected quadratic decrease in the cost to each device. For the case where the number of faults is within a (1-∊) factor of the optimal fault tolerance, for any constant ∊>0, we give a reliable broadcast protocol that improves further by achieving an expected (roughly) exponential decrease in the cost to each device.
The third and final major result of this dissertation addresses single-hop communication where 𝙎 and 𝙍 both consist of multiple peers that need to communicate in an attack-resistant P2P network. There are several analytical results on P2P networks that can tolerate an adversary who controls a large number of peers and uses them to disrupt network functionality. Unfortunately, in such systems, operations such as data retrieval and message sending incur significant communication costs. Here, we employ cryptographic techniques to define two protocols, both of which are more efficient than existing solutions. For a network of 𝘯 peers, our first protocol is deterministic with O(log²𝘯) message complexity and our second protocol is randomized with expected O(log 𝘯) message complexity; both improve over all previous results. The hidden constants and setup costs for our protocols are small, and no trusted third party is required. Finally, we present an analysis showing that our protocols are practical for deployment under significant churn and adversarial behaviour.
Quality-consciousness in Large-scale Content Distribution in the Internet. Gupta, Minaxi, 23 July 2004.
Content distribution is the primary function of the Internet today. Technologies like multicast and peer-to-peer networks hold the potential to serve content to large populations in a scalable manner. While multicast provides an efficient transport mechanism for one-to-many and many-to-many delivery of data in an Internet environment, peer-to-peer networks allow scalable content location and retrieval among large groups of users in the Internet.
Incorporating quality-consciousness into these technologies is necessary to enhance the overall experience of clients. This dissertation focuses on architectures and mechanisms to enhance multicast and peer-to-peer content distribution through quality-consciousness. In particular, the following aspects of quality-consciousness are addressed: 1) client latency, 2) service differentiation, and 3) content quality.
Data analysis shows that existing multicast scheduling algorithms behave unfairly when the access conditions for popular files change. They favor the popular files while penalizing files whose access conditions have not changed. To maintain the client latency for all files under dynamic access conditions, we develop a novel multicast scheduling algorithm that requires no change in server provisioning.
Service differentiation is a desirable functionality for both multicast and peer-to-peer networks. For multicast, we design a scalable and low-overhead service differentiation architecture. For peer-to-peer networks, we focus on a protocol to provide different levels of service to peers based on their contributions to the system.
The ability to associate reliable reputations with peers in a peer-to-peer network is a useful feature of these networks. Reliable reputations can help establish trust in these networks and hence improve content quality. They can also serve as a substrate for a service differentiation scheme. This dissertation develops two methods of tracking peer reputations with varying degrees of reliability and overhead.
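As a rough illustration of contribution-based service differentiation (a generic scheme assumed here for concreteness; the dissertation's protocol may measure contribution differently), a peer can simply order pending requests by each requester's upload-to-download ratio:

```python
class Peer:
    """Tracks how much a peer has contributed to and consumed from the system."""
    def __init__(self, name, uploaded_mb, downloaded_mb):
        self.name, self.uploaded_mb, self.downloaded_mb = name, uploaded_mb, downloaded_mb

    def contribution(self):
        # Upload-to-download ratio; the +1 avoids division by zero for new peers.
        return self.uploaded_mb / (self.downloaded_mb + 1)

def serve_order(requests):
    """Order pending (peer, item) requests so higher-contributing peers are served first."""
    return sorted(requests, key=lambda req: req[0].contribution(), reverse=True)

if __name__ == "__main__":
    requests = [(Peer("free_rider", 1, 500), "file_a"),
                (Peer("contributor", 400, 100), "file_b")]
    print([(p.name, item) for p, item in serve_order(requests)])
    # -> [('contributor', 'file_b'), ('free_rider', 'file_a')]
```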
Understanding Churn in Decentralized Peer-to-Peer Networks. Yao, Zhongmei, August 2009.
This dissertation presents a novel modeling framework for understanding the dynamics
of peer-to-peer (P2P) networks under churn (i.e., random user arrival/departure)
and designing systems more resilient against node failure. The proposed models are
applicable to general distributed systems under a variety of conditions on graph construction
and user lifetimes.
The foundation of this work is a new churn model that describes user arrival and
departure as a superposition of many periodic (renewal) processes. It not only allows
general (non-exponential) user lifetime distributions, but also captures heterogeneous
behavior of peers. We utilize this model to analyze link dynamics and the ability
of the system to stay connected under churn. Our results offer exact computation
of user-isolation and graph-partitioning probabilities for any monotone lifetime distribution,
including heavy-tailed cases found in real systems. We also propose an
age-proportional random-walk algorithm for creating links in unstructured P2P networks
that achieves zero isolation probability as the system size becomes infinite. We
additionally obtain many insightful results on the transient distribution of in-degree,
edge arrival process, system size, and lifetimes of live users as simple functions of the
aggregate lifetime distribution.
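As a rough, self-contained illustration of the kind of question this framework answers (a toy simulation, not the dissertation's analytical model; every parameter value below is an assumption), the sketch simulates a peer that maintains k neighbors whose lifetimes are heavy-tailed and that are replaced after a fixed search delay, and estimates how often the peer becomes isolated:

```python
import random

def pareto_lifetime(alpha=3.0, scale=1.0):
    """Heavy-tailed (shifted Pareto) lifetime, of the kind observed in real P2P systems."""
    return scale * (random.random() ** (-1.0 / alpha) - 1.0)

def isolation_probability(k=3, search_delay=0.1, horizon=50.0, dt=0.01, trials=200):
    """Fraction of runs in which all k neighbor slots are down at the same instant."""
    isolated_runs = 0
    for _ in range(trials):
        alive = [True] * k
        remaining = [pareto_lifetime() for _ in range(k)]  # time left in current state
        t, isolated = 0.0, False
        while t < horizon and not isolated:
            for i in range(k):
                remaining[i] -= dt
                if remaining[i] <= 0:                      # neighbor departs or is replaced
                    alive[i] = not alive[i]
                    remaining[i] = pareto_lifetime() if alive[i] else search_delay
            isolated = not any(alive)
            t += dt
        isolated_runs += isolated
    return isolated_runs / trials

if __name__ == "__main__":
    for k in (1, 2, 3, 4):
        print(f"k={k}: isolation probability ~ {isolation_probability(k=k):.2f}")
```

Adding neighbors or biasing selection toward longer-lived (older) peers drives this probability down, which is, loosely, the intuition behind the age-proportional link-creation result.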
The second half of this work studies churn in structured P2P networks that are
usually built upon distributed hash tables (DHTs). Users in DHTs maintain two types of neighbor sets: routing tables and successor/leaf sets. The former tables determine
link lifetimes and routing performance of the system, while the latter are built for
ensuring DHT consistency and connectivity. Our first result in this area proves that
the robustness of DHTs is mainly determined by the zone size of selected neighbors, which
leads us to propose a min-zone algorithm that significantly reduces link churn in
DHTs. Our second result uses the Chen-Stein method to understand concurrent
failures among strongly dependent successor sets of many DHTs and finds an optimal
stabilization strategy for keeping Chord connected under churn.
Robust and Scalable Sampling Algorithms for Network Measurement. Wang, Xiaoming, August 2009.
Recent growth of the Internet in both scale and complexity has imposed a number of difficult challenges on existing measurement techniques and approaches, which
are essential for both network management and many ongoing research projects. For
any measurement algorithm, achieving both accuracy and scalability is very challenging given hard resource constraints (e.g., bandwidth, delay, physical memory, and
CPU speed). My dissertation research tackles this problem by first proposing a novel
mechanism called residual sampling, which intentionally introduces a predetermined
amount of bias into the measurement process. We show that such biased sampling
can be extremely scalable; moreover, we develop residual estimation algorithms that
can unbiasedly recover the original information from the sampled data. Utilizing
these results, we further develop two versions of the residual sampling mechanism:
a continuous version for characterizing the user lifetime distribution in large-scale
peer-to-peer networks and a discrete version for monitoring flow statistics (including
per-flow counts and the flow size distribution) in high-speed Internet routers. For the
former application in P2P networks, this work presents two methods: ResIDual-based
Estimator (RIDE), which takes single-point snapshots of the system and assumes
systems with stationary arrivals, and Uniform RIDE (U-RIDE), which takes multiple snapshots and adapts to systems with arbitrary (including non-stationary) arrival
processes. For the latter application in traffic monitoring, we introduce Discrete
RIDE (D-RIDE), which allows one to sample each flow with a geometric random variable. Our numerous simulations and experiments with P2P networks and real
Internet traces confirm that these algorithms produce accurate estimates of the monitored metrics while operating within hard resource constraints. These results show that residual sampling indeed provides an ideal solution for balancing accuracy and scalability.
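As a toy illustration of the residual-sampling idea (not the dissertation's RIDE, U-RIDE, or D-RIDE estimators; the Poisson arrival process, Pareto parameters, and bin width are all assumptions made for this sketch), a single snapshot observes only the remaining lifetimes of currently-alive users, a deliberately biased sample, yet the tail of the original lifetime distribution can still be recovered from it because the residual-lifetime density is proportional to P(L > x):

```python
import random

ALPHA, SCALE = 3.0, 1.0            # illustrative Pareto lifetime parameters
RATE, T_SNAPSHOT = 20000.0, 50.0   # Poisson arrival rate and snapshot time
BIN = 0.1                          # histogram bin width for the residuals

def pareto_lifetime():
    return SCALE * (random.random() ** (-1.0 / ALPHA) - 1.0)

def true_ccdf(x):
    return (1.0 + x / SCALE) ** (-ALPHA)

# Simulate Poisson arrivals up to the snapshot; keep only the *residual*
# (remaining) lifetimes of the users still alive at time T_SNAPSHOT.
residuals, t = [], 0.0
while t < T_SNAPSHOT:
    t += random.expovariate(RATE)
    life = pareto_lifetime()
    if t < T_SNAPSHOT and t + life > T_SNAPSHOT:
        residuals.append(t + life - T_SNAPSHOT)

# Recovery: in equilibrium the residual-lifetime density f_R satisfies
# P(L > x) = f_R(x) / f_R(0), so a histogram of residuals normalized by its
# first bin approximately reconstructs the tail of the lifetime distribution.
hist = {}
for r in residuals:
    hist[int(r / BIN)] = hist.get(int(r / BIN), 0) + 1

for b in range(1, 6):
    x = b * BIN
    est = hist.get(b, 0) / hist.get(0, 1)
    print(f"x={x:.1f}  estimated P(L>x)={est:.3f}  true P(L>x)={true_ccdf(x):.3f}")
```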
The Evaluation of Inquiry-based Learning with Incentive Mechanisms on Peer-to-Peer Networks. Wu, Shih-neng, 27 July 2004.
With the rapid development of information technologies, especially the Internet, people can communicate more flexibly through various media, and knowledge can be shared in the process. Whether knowledge is gained through digital content retrieval or interpersonal interaction, learning activities conducted on the Web are becoming increasingly popular. This research has two main objectives. The first is to develop incentive mechanisms that enhance the quantity and quality of information shared through peer-to-peer (P2P) networks. The second is to implement and evaluate the proposed mechanisms for inquiry-based learning on P2P networks.
A pricing-like incentive mechanism is embedded in each peer to determine the price of sharing a document, issuing a question, and responding to a question. Through experiments, this study evaluates the mechanism's effects on mitigating free-riding and on information exchange through the P2P network. The results show the effectiveness of the incentive mechanisms for inquiry-based learning on P2P networks.
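A minimal sketch of how such a pricing-like incentive could operate (a generic credit economy assumed here purely for illustration; the prices and action names are not taken from the study): each peer holds a credit balance, earns credits by sharing documents or answering questions, and spends credits to ask questions or download, so a free-rider eventually cannot consume without contributing.

```python
class LearningPeer:
    """Toy credit account for a peer in an inquiry-based P2P learning network."""
    # Illustrative prices (assumptions); the study's mechanism determines these per peer.
    PRICE = {"share_document": +3, "answer_question": +5,
             "ask_question": -4, "download_document": -2}

    def __init__(self, name, credits=10):
        self.name, self.credits = name, credits

    def act(self, action):
        """Apply an action if the peer can afford it; return whether it succeeded."""
        delta = self.PRICE[action]
        if self.credits + delta < 0:
            return False                     # free-riding blocked: not enough credits
        self.credits += delta
        return True

if __name__ == "__main__":
    peer = LearningPeer("alice")
    print(peer.act("ask_question"), peer.credits)     # True 6
    print(peer.act("answer_question"), peer.credits)  # True 11
    for _ in range(3):
        peer.act("ask_question")                      # 7, then 3, then blocked
    print(peer.credits)                               # 3
```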
The Study of Dynamic Team Formation in Peer-to-Peer Networks. Chiang, Chi-hsun, 27 July 2004.
Most virtual communities are built on the client/server model, which has limitations such as maintenance cost and the protection of personal attributes. Peer-to-peer systems have strengths that can overcome these limitations, so we aim to move the virtual community onto a peer-to-peer system.
There are two main team-formation approaches in current virtual community collaboration, and either approach alone has its limitations. In this study, we adopt the social network concept to design a team-formation mechanism that overcomes the limitations of the current approaches. Moreover, because of the nature of peer-to-peer systems, messages are exchanged directly over the network, and the proposed mechanism can also reduce the traffic cost of the team-formation process. Furthermore, it maintains the fitness of the members chosen for the same team.
Achieving Electronic Healthcare Record (EHR) Interoperability Across Healthcare Information Systems. Kilic, Ozgur, 01 June 2008.
Providing an interoperability infrastructure for Electronic Healthcare Records (EHRs) is on the agenda of many national and regional eHealth initiatives. Two important integration profiles have been specified for this purpose: the "IHE Cross-enterprise Document Sharing (XDS)" and the "IHE Cross Community Access (XCA)". XDS describes how to share EHRs in a community of healthcare enterprises and XCA describes how EHRs are shared across communities.
However, no current solution addresses some of the important challenges of cross-community exchange environments. The first challenge is scalability: if every community joining the network needs to connect to every other community, the approach will not scale. Furthermore, each community may use a different coding vocabulary for the same metadata attribute, in which case the target community cannot interpret a query involving that attribute. Another important challenge is that each community has a different patient identifier domain, and querying for patient identifiers in another community using patient demographic data may raise patient privacy concerns. Yet another challenge in cross-community EHR access is EHR content interoperability, since communities may be using different EHR content standards.
Lifetime-Based Scheduling Algorithms for P2P Collaborative File Distribution. Liu, Yun-Chi, 06 August 2008.
Prior research in P2P file sharing mostly focuses on topics such as overlay topology, content searching, peer discovery, sharing fairness, and incentive mechanisms, rather than on scheduling algorithms for peer-to-peer collaborative file distribution. A scheduling algorithm specifies how file pieces are distributed among peers. When a peer holding the rarest piece leaves, other peers may be left downloading an incomplete file. Our algorithm takes the lifetimes of peers in the P2P network into account. We first use the distribution of peer lifetimes and the demand of each peer to decide which peers send the rare pieces, and then use the same information to decide which peers receive those pieces. Our goals are to maximize the number of peers that download the entire file before they leave, to increase the availability of different file pieces, and to minimize the transmission time of the last peer to complete. Lastly, we compare the performance of the RPF, MDNF, Lifetime-based RPF, and Lifetime-based MDNF algorithms.
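The sketch below gives a hedged, lifetime-aware piece-selection rule in the spirit of this idea (the names and the exact urgency measure are assumptions, not the thesis's RPF/MDNF variants): the piece whose remaining copies sit on the shortest-lived peers is replicated first, sent by the holder about to leave and received by a long-lived peer that still needs it.

```python
def next_transfer(peers):
    """peers: dict peer_name -> (set_of_pieces_held, expected_remaining_lifetime)."""
    all_pieces = set().union(*(pieces for pieces, _ in peers.values()))

    def urgency(piece):
        # Total remaining lifetime over all current holders: a small value means
        # the piece is rare and its holders are about to leave.
        return sum(life for pieces, life in peers.values() if piece in pieces)

    piece = min(all_pieces, key=urgency)
    holders = [n for n, (pieces, _) in peers.items() if piece in pieces]
    needers = [n for n, (pieces, _) in peers.items() if piece not in pieces]
    sender = min(holders, key=lambda n: peers[n][1])           # leaving soonest
    receiver = max(needers, key=lambda n: peers[n][1]) if needers else None
    return piece, sender, receiver

if __name__ == "__main__":
    peers = {"A": ({1, 2}, 5.0),   # A leaves soon and holds the only copy of piece 1
             "B": ({2, 3}, 60.0),
             "C": ({2}, 30.0)}
    print(next_transfer(peers))    # (1, 'A', 'B'): replicate piece 1 before A departs
```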