  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

Scheduling algorithms for data distribution in peer-to-peer collaborative file distribution networks

Chan, Siu-kei, Jonathan, 陳兆基 January 2006 (has links)
published_or_final_version / abstract / Electrical and Electronic Engineering / Master / Master of Philosophy
92

Scalable content distribution in overlay networks

Kwan, Tin-man, Tony., 關天文. January 2007 (has links)
published_or_final_version / abstract / Electrical and Electronic Engineering / Master / Master of Philosophy
93

Large-scale Peer-to-peer Streaming: Modeling, Measurements, and Optimizing Solutions

Wu, Chuan 26 February 2009 (has links)
Peer-to-peer streaming has emerged as a killer application in today's Internet, delivering a large variety of live multimedia content to millions of users at any given time with low server cost. Though successfully deployed, the efficiency and optimality of the current peer-to-peer streaming protocols are still less than satisfactory. In this thesis, we investigate optimizing solutions to enhance the performance of the state-of-the-art mesh-based peer-to-peer streaming systems, utilizing both theoretical performance modeling and extensive real-world measurements. First, we model peer-to-peer streaming applications in both the single-overlay and multi-overlay scenarios, based on the solid foundation of optimization and game theories. Using these models, we design efficient and fully decentralized solutions to achieve performance optimization in peer-to-peer streaming. Then, based on a large volume of live measurements from a commercial large-scale peer-to-peer streaming application, we extensively study the real-world performance of peer-to-peer streaming over a long period of time. Highlights of our measurement study include the topological characterization of large-scale streaming meshes, the statistical characterization of inter-peer bandwidth availability, and the investigation of server capacity utilization in peer-to-peer streaming. Utilizing in-depth insights from our measurements, we design practical algorithms that advance the performance of key protocols in peer-to-peer live streaming. We show that our optimizing solutions fulfill their design objectives in various realistic scenarios, using extensive simulations and experiments.
94

Predicting the content of peer-to-peer interactions

Besana, Paolo January 2009 (has links)
Software agents interact to solve tasks, the details of which need to be described in a language understandable by all the actors involved. Ontologies provide a formalism for defining both the domain of the task and the terminology used to describe it. However, finding a shared ontology has proved difficult: different institutions and developers have different needs and formalise them in different ontologies. In a closed environment it is possible to force all the participants to share the same ontology, while in open and distributed environments ontology mapping can provide interoperability between heterogeneous interacting actors. However, conventional mapping systems focus on acquiring static information, and on mapping whole ontologies, which is infeasible in open systems. This thesis presents a different approach to the problem of heterogeneity. It starts from the intuitive idea that when similar situations arise, similar interactions are performed. If the interactions between actors are specified in formal scripts, shared by all the participants, then when the same situation arises, the same script is used. The main hypothesis that this thesis aims to demonstrate is that by analysing different runs of these scripts it is possible to create a statistical model of the interactions that reflects the frequency of terms in messages and of ontological relations between terms in different messages. The model is then used during a run of a known interaction to compute the probability distribution for terms in received messages. The probability distribution provides additional information, contextual to the interaction, that can be used by a traditional ontology matcher to improve efficiency, by reducing comparisons to the most likely ones given the context, and possibly to improve both recall and precision, in particular by helping disambiguation.
The ability to create a model that reflects real phenomena in this sort of environment is evaluated by analysing the quality of the predictions, in particular by verifying how various features of the interactions, such as their non-stationarity, affect the predictions. The actual improvements to a matcher we developed are also evaluated. The overall results are very promising: using the predictor can reduce the overall computation time for matching tenfold, while maintaining or in some cases improving recall and precision.
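The frequency-based prediction idea described above can be illustrated with a toy model. This is only a sketch under assumptions of my own (the slot-based structure and class names are invented for illustration, not Besana's actual design): past runs of a script are observed, term counts are accumulated per message slot, and a probability distribution over terms is produced for that slot in a new run.

```python
from collections import Counter, defaultdict

class TermPredictor:
    """Toy statistical model of scripted interactions (illustrative only).

    For each message slot of an interaction script, counts which terms
    appeared across past runs, then predicts a probability distribution
    over terms for that slot in a future run. A matcher could use this
    distribution to compare only the most likely candidate terms first.
    """

    def __init__(self):
        # slot name -> Counter of observed terms
        self.counts = defaultdict(Counter)

    def observe(self, slot, term):
        """Record one term seen in `slot` during a past run."""
        self.counts[slot][term] += 1

    def predict(self, slot):
        """Return {term: probability} for `slot`, empty if never observed."""
        c = self.counts[slot]
        total = sum(c.values())
        return {t: n / total for t, n in c.items()} if total else {}
```

In use, a matcher would query `predict(slot)` before mapping an incoming message and try high-probability terms first, which is how context can cut down the number of ontology comparisons.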
95

Download Time Reduction Using Recent Performance-Biased Peer Replacement In Stochastic P2P Content Delivery Networks

Wilkins, Richard S. 01 January 2013 (has links)
Peer-to-peer networks are a common methodology used for content delivery and data sharing on the Internet. The duration of any particular download session is highly dependent on the capacity of the node servers selected as source peers. Recent investigations have shown that specific total download times may deviate significantly from average total download times. This work investigates a novel approach to selecting download server peers that is intended to reduce average total download times and minimize variance of those times from the overall average of download session durations. Typical algorithms used in today's Peer-to-Peer (P2P) systems have improved from simply connecting to a single source peer for the entire download session to an approach where the download content is divided into chunks and a randomly selected source peer is chosen as the source for each chunk. This limits the possibility of an extremely long download session due to the selection of a low capacity download source peer. Prior work has demonstrated that it is better to divide the download session into time slices and download as much as possible from a randomly selected source peer within each time interval rather than staying connected to a poorly performing source peer for the entire time it takes to download a chunk of the source content. This approach minimizes the time spent with a poorly performing source peer and allows the client to move on to a potentially better performing source after a specific, limited interval. Other work has shown that by keeping a history of node performance, it is possible to recognize a poorly performing partner early in a time slice and depart to a new, and hopefully better performing partner, prior to the completion of a time slice (referred to as "choking" a poorly performing peer). These approaches have been shown to reduce average download times and minimize the variance in duration between download sessions. 
Prior work has also shown that there is value in the use of parallel download streams. As long as the number of streams is kept small (i.e., 4 to 6), limiting both the overhead of maintaining numerous streams and the saturation of the available incoming network bandwidth, average download time can be further reduced. The algorithm described in this investigation uses time-based source peer switching and maintains a small number of parallel download streams. At the end of each time interval, it does not randomly replace all source peers but keeps those source peers that are performing relatively better and replaces those performing relatively poorly with randomly selected new source partners. In this way, as the download progresses, parallel downloading typically proceeds with a set of better and better performing source partners, therefore reducing average download times and reducing overall variance between download times. This approach has been shown in simulations to reduce average download times by nearly 20% over parallel versions of basic time-based downloads and by 8.7% over time-based downloads with the addition of "choking" poorly performing partners. As file sizes increase and/or the heterogeneity of source node service capacities increases, the benefit of this approach to source peer replacement also increases. These improvements are gained while maintaining or further limiting the variance in performance between download sessions. The reduction in average download times and the decrease in variance between download sessions provided by this approach will improve the user experience for users of P2P content distribution systems. This approach should also be applicable to other data distribution systems such as distributed file systems.
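The per-time-slice replacement step described in this abstract can be sketched as follows. This is a minimal illustration of the general idea, not the dissertation's exact procedure; the function name, the `keep` parameter, and the flat rate dictionary are assumptions introduced for the example.

```python
import random

def replace_peers(active, candidates, measured_rate, keep=2):
    """One time-slice step of recent-performance-biased peer replacement.

    active:        list of currently connected source peer ids
    candidates:    ids of other known peers we could connect to
    measured_rate: peer id -> throughput observed during this slice
    keep:          how many of the best performers to retain

    Keeps the `keep` best-performing source peers and fills the
    remaining parallel-stream slots with randomly chosen new peers,
    so successive slices drift toward better-performing sources.
    """
    # Rank current sources by observed throughput, best first.
    ranked = sorted(active, key=lambda p: measured_rate.get(p, 0.0), reverse=True)
    kept = ranked[:keep]
    # Replace the rest with random picks from the candidate pool.
    pool = [c for c in candidates if c not in kept]
    n_new = len(active) - len(kept)
    return kept + random.sample(pool, min(n_new, len(pool)))
```

Called once per time slice, this keeps the stream count constant (here four parallel streams, within the 4-to-6 range the abstract cites) while biasing the source set toward recently fast peers.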
96

Performance Challenges and Optimization Potential of Peer-to-Peer Overlay Technologies / Leistungsanforderungen und Optimierungspotential von Peer-to-Peer Overlay-Technologien

Oechsner, Simon January 2010 (has links) (PDF)
In today's Internet, building overlay structures to provide a service is becoming more and more common. This approach allows for the utilization of client resources, thus being more scalable than a client-server model in this respect. However, in these architectures the quality of the provided service depends on the clients and is therefore more complex to manage. Resource utilization, both at the clients themselves and in the underlying network, determines the efficiency of the overlay application. Here, a trade-off exists between the resource providers and the end users that can be tuned via overlay mechanisms. Thus, resource management and traffic management are always quality-of-service management as well. In this monograph, the three currently most significant and most widely used overlay types in the Internet are considered. These overlays are implemented in popular applications which only recently have gained importance. Thus, these overlay networks still face real-world technical challenges which are of high practical relevance. We identify the specific issues for each of the considered overlays, and show how their optimization affects the trade-offs between resource efficiency and service quality. Thus, we supply new insights and system knowledge that is not provided by previous work.
97

The Field of Consumption: Contemporary Dynamics of Status, Capital, and Exchange

Dubois, Emilie January 2015 (has links)
Thesis advisor: Paul Gray / This dissertation analyzes the field of consumption to provide an analysis of Bourdieusian cultural capital. Bourdieu introduced cultural capital to express the summed effect of intergenerational and personal institutional credentials on economic structure (1986). The three articles of this dissertation – Imagining Class, Precariat Production, and New Cultures of Connection – take up the study of cultural capital in a contemporary, American context among Millennial consumers (Bourdieu 1984, 1993). These cases analyze producer and consumer experiences within the capital markets for durable goods, labor, and a barter market for services. The experiences under analysis include the design and purchase of luxury clothing, the selling of labor to temporary employers, and the barter of unlike services for a like medium of exchange. The analyses build upon Bourdieu’s concept of cultural capital by tracing its role and evolution through producer and consumer exchanges in the “consumer field” (Bourdieu 1984, 1986, 1993). The analysis of this dissertation relies on semi-structured interview, ethnographic, and survey data. In total, 96 semi-structured interviews were conducted during the data collection for the three articles. Interview data is supported by economic survey and ethnographic data for research participants. Imagining Class melds postmodern and Veblenian consumer theory through informant narratives of the cynical and strategic production of conspicuous consumption enacted by both producers and consumers of the clothing brand Prep Outfitters (Featherstone 1991; Veblen 1899/1994). Upwardly mobile, young consumers believe that performing an elite lifestyle is a condition upon which financial services career success rests. The shared belief is correlated with income increases and results in an environment of aesthetic and lifestyle conformity on Wall Street.
Precariat Production analyzes the motivational aspects and economic benefits of collaborative production work within the online platforms of Airbnb, RelayRides, and Taskrabbit and provides insight into the nature of the new working precariat class (Standing 2009). Analysis shows that three central motivational categories drive participation: money, efficiency/environmental, and workplace flexibility. Possession of economic assets prior to beginning work as a collaborative producer is a key characteristic associated with high earning within the precarious, collaborative marketplace, yet cultural capital is not a significant correlate of high income relative to the labor market. Further, those who enjoy the most economic success within the collaborative marketplace as “high earners” are also most likely to express that a motivation of “efficiency/environmental” drives their production. The efficiency/environmental motivational finding lends broader support to the claim of an evolution of high cultural capital expressions of ecohabitus (Bourdieu 1984; Schor et al. 2014). New Cultures of Connection evaluates the exchanges made on an egalitarian barter market through the medium of a local currency, “time dollars.” The study uses Zelizer’s concept of a circuit of commerce (2005) to show that cultural capital limits the potential trades available in the time bank and reveals that those with high cultural capital exit the market. Ecohabitus provides one exception to this finding, as high cultural capital participants find nonmonetary value in authenticity, localism, environmentalism, holistic wellness, and self-reliance. Yet this new set of high cultural capital preferences does not pair with their exchanges, as they demonstrate an enduring inclination towards professionalized, market-like services. Disparities in cultural capital challenge the potential of barter networks like the “time bank” to alter the dependence of identities on market practice and success.
/ Thesis (PhD) — Boston College, 2015. / Submitted to: Boston College. Graduate School of Arts and Sciences. / Discipline: Sociology.
98

Towards efficient distributed search in a peer-to-peer network.

January 2007 (has links)
Cheng Chun Kong. / Thesis submitted in November 2006. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2007. / Includes bibliographical references (leaves 62-64). / Abstracts in English and Chinese. / Contents: Introduction; Literature Review; Design (Overview; Basic idea; Follow-up design; Summary); Experimental Findings (Goal; Analysis Methodology; Validation; Results); Deployment (Limitations; Miscellaneous Design Issues); Future Directions and Conclusions; References; Appendix.
99

Performance and availability analysis of BitTorrent-like file sharing systems.

January 2006 (has links)
Fan Bin. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2006. / Includes bibliographical references (leaves 72-76). / Abstracts in English and Chinese. / Contents: Introduction (Background; Motivation; Our Contribution; Structure of the thesis); Related Work (Measurement Based Studies; Analytical Modeling of BitTorrent System; Fairness and Incentive Mechanism); Scalability (Analytical Model; Steady-State Performance Measures; Model Validation and Evaluation; Model Extension for Peers behind Firewalls); File Availability (Modeling the File Availability; Performance of Different Chunk Selection Algorithms); Fairness (Mathematical Model; Rate Assignment Strategies; A Family of Distributed Algorithms; Performance Evaluation); Conclusion; Bibliography; Appendices (Proofs of Theorems 3.1 and 5.2).
100

Bandwidth-efficient video streaming with network coding on peer-to-peer networks

Huang, Shenglan January 2017 (has links)
Over the last decade, live video streaming applications have gained great popularity among users but put great pressure on video servers and the Internet. In order to satisfy the growing demand for live video streaming, Peer-to-Peer (P2P) techniques have been developed to relieve video servers of bandwidth bottlenecks and computational load. Furthermore, Network Coding (NC) has been proposed and proved to be a significant breakthrough in information theory and coding theory. According to previous research, NC not only brings substantial improvements in throughput and delay in data transmission, but also provides innovative solutions for multiple issues related to resource allocation, such as the coupon-collector problem and the allocation and scheduling procedure. However, the complexity of an NC-driven P2P streaming network poses substantial challenges to the packet scheduling algorithm. This thesis focuses on packet scheduling algorithms for video multicast in NC-driven P2P streaming networks. It determines how the upload bandwidth resources of peer nodes are allocated in different transmission scenarios to achieve a better Quality of Service (QoS). First, an optimized rate allocation algorithm is proposed for scalable video transmission (SVT) in the NC-based lossy streaming network. This algorithm is developed to achieve a tradeoff between average video distortion and average bandwidth redundancy in each generation. It determines how senders allocate their upload bandwidth to different classes in scalable data so that the sum of the distortion and the weighted redundancy ratio can be minimized. Second, in the NC-based non-scalable video transmission system, the bandwidth inefficiency caused by asynchronous communication among peers is reduced. First, a scalable compensation model and an adaptive push algorithm are proposed to reduce unrecoverable transmissions caused by network loss and insufficient bandwidth resources.
Then a centralized packet scheduling algorithm is proposed to reduce the uninformative transmissions caused by asynchronous communication among sender nodes. Subsequently, we further propose a distributed packet scheduling algorithm, which adds a critical scalability property to the packet scheduling model. Third, the bandwidth resource scheduling for SVT is further studied. A novel multiple-generation scheduling algorithm is proposed to determine the quality classes that the receiver node can subscribe to so that the overall perceived video quality can be maximized. A single-generation scheduling algorithm for SVT is also proposed to provide a faster and easier solution to the video quality maximization function. Thorough theoretical analysis is conducted in the development of all proposed algorithms, and their performance is evaluated via comprehensive simulations. We have demonstrated that, by adjusting the conventional transmission model and introducing new packet scheduling models, the overall QoS and bandwidth efficiency are dramatically improved. In the non-scalable video streaming system, the maximum video quality gain is around 5 dB compared with the random push method, and the overall uninformative transmission ratio is reduced to 1%-2%. In the scalable video streaming system, the maximum video quality gain is around 7 dB, and the overall uninformative transmission ratio is reduced to 2%-3%.
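The generation-based network coding that underlies this work can be illustrated with a minimal encoder. This sketch uses random linear coding over GF(2) (coefficients are bits, combination is XOR) purely for simplicity; it is not the thesis's scheduling algorithm, and the function name and structure are assumptions for illustration. A receiver that collects as many linearly independent coded packets as there are source packets in the generation can decode by Gaussian elimination.

```python
import random

def encode_gf2(generation):
    """Produce one network-coded packet from a generation of source packets.

    generation: list of equal-length `bytes` objects (the source packets).
    Returns (coeffs, payload) where coeffs is the random GF(2) coefficient
    vector and payload is the XOR of the source packets selected by it.
    """
    size = len(generation[0])
    # Draw a random coefficient (0 or 1) for each source packet.
    coeffs = [random.randint(0, 1) for _ in generation]
    if not any(coeffs):  # avoid the useless all-zero combination
        coeffs[random.randrange(len(coeffs))] = 1
    # XOR together the packets whose coefficient is 1.
    payload = bytes(size)
    for c, pkt in zip(coeffs, generation):
        if c:
            payload = bytes(a ^ b for a, b in zip(payload, pkt))
    return coeffs, payload
```

Because any sufficiently large set of independent coded packets decodes the whole generation, a scheduler no longer has to push one specific missing chunk to one specific peer, which is the property the thesis's rate allocation and push algorithms build on.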
