121

Evaluation of Globe Location Service Performance

Reynisson, Gauti January 2000 (has links)
Performance evaluation of Globe’s location service is becoming necessary to help steer development in the right direction. In this paper I put the current implementation of the location service to work and design and set up a number of tests with input data from a mobile phone environment provided by the Stanford University Mobile Activity Traces (SUMATRA). It turns out that the implementation is not ready for performance evaluation at this scale after all, and that no performance evaluation can be done with SUMATRA, since that data contains too many inconsistencies.
122

Secure and Distributed Multicast Address Allocation on IPv6 Networks

Slaviero, Marco Lorenzo 10 February 2005 (has links)
Address allocation has been a limiting factor in the deployment of multicast solutions, and, as other multicast technologies advance, a general solution to this problem becomes more urgent. This study examines the current state of address allocation and finds impediments in many of the proposed solutions. A number of the weaknesses can be traced back to the rapidly ageing Internet Protocol version 4, and therefore it was decided that a new approach is required. A central part of this work relies on the newer Internet Protocol version 6, specifically the unicast-prefix-based multicast address format. The primary aim of this study was to develop an architecture for secure distributed IPv6 multicast address allocation. The architecture should be usable by client applications to retrieve addresses which are globally unique. The product of this work was the Distributed Allocation Of Multicast Addresses Protocol, or DAOMAP. It is a system which can be deployed on nodes which wish to take part in multicast address allocation, and an implementation was developed. Analysis and simulations determined that the devised model fitted the stated requirements, and security testing determined that DAOMAP was safe from a series of attacks. / Dissertation (MSc (Computer Science))--University of Pretoria, 2006. / Computer Science / unrestricted
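The unicast-prefix-based multicast address format this work builds on (RFC 3306) embeds an IPv6 unicast prefix directly in the multicast address, which is what lets each prefix owner allocate group addresses that are globally unique. A minimal sketch of constructing such an address, independent of DAOMAP itself:

```python
import ipaddress

def unicast_prefix_based_mcast(prefix: str, group_id: int, scope: int = 0xE) -> ipaddress.IPv6Address:
    """Build a unicast-prefix-based IPv6 multicast address (RFC 3306).

    Layout: FF | flags=0x3 | 4-bit scope | 8 reserved bits | 8-bit prefix length |
            64-bit network prefix | 32-bit group ID.
    """
    net = ipaddress.IPv6Network(prefix)
    if net.prefixlen > 64:
        raise ValueError("prefix length must be at most 64 bits")
    # Top 64 bits of the unicast prefix, right-padded with zeros.
    prefix_bits = int(net.network_address) >> 64
    addr = (0xFF3 << 116) | (scope << 112) | (net.prefixlen << 96) \
           | (prefix_bits << 32) | (group_id & 0xFFFFFFFF)
    return ipaddress.IPv6Address(addr)

# Example: allocate group 0x1 under the documentation prefix 2001:db8::/32.
print(unicast_prefix_based_mcast("2001:db8::/32", 0x1))  # ff3e:20:2001:db8::1
```

Because the owner of the unicast prefix controls the embedded prefix bits, uniqueness of the full multicast address reduces to assigning unique 32-bit group IDs within that prefix, which is the allocation problem a protocol like DAOMAP coordinates.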
123

Architecture for a Fully Decentralized Peer-to-Peer Collaborative Computing Platform

Wilson, Dany January 2015 (has links)
We present an architecture for a fully decentralized peer-to-peer collaborative computing platform, offering services similar to a Cloud Service Provider’s Platform-as-a-Service (PaaS) model, using volunteered resources rather than dedicated resources. This thesis is motivated by three research questions: (1) Is it possible to build a peer-to-peer collaborative system using a fully decentralized infrastructure relying only on volunteered resources?, (2) How can light virtualization be used to mitigate the complexity inherent to the volunteered resources?, and (3) What are the minimal requirements for a computing platform similar to the PaaS cloud computing platform? We propose an architecture composed of three layers: the Network layer, the Virtual layer, and the Application layer. We also propose to use light virtualization technologies, or containers, to provide a uniform abstraction of the contributing resources and to isolate the host environment from the contributed environment. Then, we propose a minimal API specification for this computing platform, which is also applicable to PaaS computing platforms. The findings of this thesis corroborate the hypothesis that peer-to-peer collaborative systems can be used as a basis for developing volunteer cloud computing infrastructures. We outline the implications of using light virtualization as an integral virtualization primitive in a public distributed computing platform. Finally, this thesis lays out a starting point for most volunteer cloud computing infrastructure development efforts, because it circumscribes the essential requirements and presents solutions to mitigate the complexities inherent to this paradigm.
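One way to picture the role of light virtualization described above is to run a contributed task inside a container with tight resource limits. The sketch below uses the Docker SDK for Python; the image name and limits are illustrative assumptions, and this is not the thesis's platform or API.

```python
import docker

def run_contributed_task(image: str, command: str) -> str:
    """Run an untrusted task in an isolated container on a volunteered host."""
    client = docker.from_env()
    output = client.containers.run(
        image,
        command,
        remove=True,           # clean up the container when the task finishes
        mem_limit="256m",      # cap memory so a task cannot exhaust the host
        network_disabled=True, # keep the task away from the host's network
    )
    return output.decode()

if __name__ == "__main__":
    print(run_contributed_task("python:3.11-slim", 'python -c "print(21 * 2)"'))
```

The container boundary is what gives the platform a uniform abstraction over heterogeneous volunteered machines while shielding the host from whatever the contributed code does.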
124

Kademlia on the Open Internet : How to Achieve Sub-Second Lookups in a Multimillion-Node DHT Overlay

Jimenez, Raúl January 2011 (has links)
Distributed hash tables (DHTs) have gained much attention from the research community in recent years. Formal analysis and evaluations on simulators and small-scale deployments have shown good scalability and performance. In stark contrast, performance measurements in large-scale DHT overlays on the Internet have yielded disappointing results, with lookup latencies measured in seconds. Others have attempted to improve lookup performance with very limited success, their lowest median lookup latency being over one second with a long tail of high-latency lookups. In this thesis, the goal is to enable large-scale DHT-based latency-sensitive applications on the Internet. In particular, we improve lookup latency in Mainline DHT, the largest DHT overlay on the open Internet, to identify and address practical issues in an existing system. Our approach is to implement and measure backward-compatible modifications to facilitate their incremental adoption into Mainline DHT (and possibly other Kademlia-based overlays), thus enabling our research to have impact on a real-world system. Our results close the performance gap between small- and large-scale DHT overlays. With a median lookup latency below 200 ms and a 99th percentile of just above 500 ms, our median lookup latency is one order of magnitude lower than the best-performing measurement reported in the literature. Moreover, our results do not show a long tail of high-latency lookups, unlike previous reports. We have achieved these results by studying how connectivity artifacts on the underlying network (probably caused by firewalls and NAT devices on the Internet) affect the DHT overlay. Our measurements of the connectivity of more than 3 million nodes reveal that connectivity artifacts are widespread and can severely degrade lookup performance. Scalability and locality-awareness have also been explored in this thesis, where different mechanisms have been proposed. Some of the mechanisms are planned to be integrated into Mainline DHT in future work. / QC 20111118
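The lookups in question are Kademlia's iterative lookups: at every step a node queries the known peers whose IDs are closest to the target under the XOR metric, so lookup latency is dominated by how many of those hops go to responsive, reachable nodes. A minimal sketch of the metric and the per-step selection, with illustrative 160-bit IDs rather than Mainline DHT's wire protocol:

```python
import hashlib

def node_id(name: str) -> int:
    """Derive a 160-bit ID, as Mainline DHT does with SHA-1."""
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

def xor_distance(a: int, b: int) -> int:
    """Kademlia's distance metric: bitwise XOR interpreted as an integer."""
    return a ^ b

def closest_nodes(target: int, known: list, k: int = 8) -> list:
    """Pick the k known nodes closest to the target, as each lookup step does."""
    return sorted(known, key=lambda n: xor_distance(n, target))[:k]

target = node_id("example-infohash")
peers = [node_id(f"peer-{i}") for i in range(100)]
for n in closest_nodes(target, peers, k=3):
    print(f"{xor_distance(n, target):040x}")  # distances shrink toward the target
```

A peer behind a restrictive NAT can appear in such a candidate list yet never answer, which is exactly the kind of connectivity artifact the thesis measures and works around.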
125

Efficient Data Stream Sampling on Apache Flink / Effektiv dataströmsampling med Apache Flink

Vlachou-Konchylaki, Martha January 2016 (has links)
Sampling is considered to be a core component of data analysis, making it possible to provide a synopsis of possibly large amounts of data by maintaining only subsets or multisubsets of it. In the context of data streaming, an emerging processing paradigm where data is assumed to be unbounded, sampling offers great potential since it can establish a representative bounded view of infinite data streams for any streaming operations. This further unlocks several benefits such as sustainable continuous execution on managed memory, trend sensitivity control and adaptive processing tailored to the operations that consume data streams. The main aim of this thesis is to conduct an experimental study in order to categorize existing sampling techniques over a selection of properties derived from common streaming use cases. For that purpose we designed and implemented a testing framework that allows for configurable sampling policies under different processing scenarios, along with a library of different samplers implemented as operators. We build on Apache Flink, a distributed stream processing system, to provide this testbed and all component implementations of this study. Furthermore, we show in our experimental analysis that there is no optimal sampling technique for all operations. Instead, there are different demands across usage scenarios such as online aggregations and incremental machine learning. In principle, we show that each sampling policy trades off bias, sensitivity and concept drift adaptation, properties that can be potentially predefined by different operators. We believe that this study serves as the starting point towards automated adaptive sampling selection for sustainable continuous analytics pipelines that can react to stream changes and thus offer the right data needed at each time, for any possible operation.
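One concrete example of a sampling policy of the kind such a study compares is classic reservoir sampling (Algorithm R), which maintains a fixed-size uniform sample of an unbounded stream. The standalone sketch below illustrates only the underlying technique, not the thesis's Flink operators:

```python
import random

class ReservoirSampler:
    """Keep a uniform random sample of fixed size over an unbounded stream."""

    def __init__(self, size: int, seed=None):
        self.size = size
        self.reservoir = []
        self.seen = 0
        self.rng = random.Random(seed)

    def offer(self, element) -> None:
        """Feed one stream element; every element seen so far stays in the
        reservoir with equal probability size/seen."""
        self.seen += 1
        if len(self.reservoir) < self.size:
            self.reservoir.append(element)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.size:
                self.reservoir[j] = element

sampler = ReservoirSampler(size=10, seed=42)
for event in range(1_000_000):
    sampler.offer(event)
print(sampler.reservoir)  # a uniform sample of the million events seen so far
```

A uniform reservoir is deliberately insensitive to recency, which is exactly the bias-versus-sensitivity trade-off the study categorizes; a sliding-window or biased sampler would make the opposite choice.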
126

Enabling and Achieving Self-Management for Large Scale Distributed Systems : Platform and Design Methodology for Self-Management

Al-Shishtawy, Ahmad January 2010 (has links)
Autonomic computing is a paradigm that aims at reducing administrative overhead by using autonomic managers to make applications self-managing. To better deal with large-scale dynamic environments, and to improve scalability, robustness, and performance, we advocate for distribution of management functions among several cooperative autonomic managers that coordinate their activities in order to achieve management objectives. Programming autonomic management in turn requires programming environment support and higher-level abstractions to become feasible. In this thesis we present an introductory part and a number of papers that summarize our work in the area of autonomic computing. We focus on enabling and achieving self-management for large-scale and/or dynamic distributed applications. We start by presenting our platform, called Niche, for programming self-managing component-based distributed applications. Niche supports a network-transparent view of the system architecture, simplifying the design of application self-* code. Niche provides a concise and expressive API for self-* code. The implementation of the framework relies on the scalability and robustness of structured overlay networks. We have also developed a distributed file storage service, called YASS, to illustrate and evaluate Niche. After introducing Niche we proceed by presenting a methodology and design space for designing the management part of a distributed self-managing application in a distributed manner. We define design steps that include partitioning of management functions and orchestration of multiple autonomic managers. We illustrate the proposed design methodology by applying it to the design and development of an improved version of our distributed storage service YASS as a case study. We continue by presenting a generic policy-based management framework which has been integrated into Niche. Policies are sets of rules that govern system behavior and reflect the business goals or system management objectives. Policy-based management is introduced to simplify management and reduce overhead, by setting up policies to govern system behavior. A prototype of the framework is presented and two generic policy languages (policy engines and corresponding APIs), namely SPL and XACML, are evaluated using our self-managing file storage application YASS as a case study. Finally, we present a generic approach to achieving robust services that is based on finite state machine replication with dynamic reconfiguration of replica sets. We contribute a decentralized algorithm that maintains the set of resource-hosting service replicas in the presence of churn. We use this approach to implement robust management elements as robust services that can operate despite churn. / QC 20100520
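Policy rules of the kind described above pair a condition over observed system state with a management action. The sketch below shows only that general shape; the rule and action names are hypothetical, and it is far simpler than SPL, XACML, or Niche's actual API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]  # predicate over observed system state
    action: Callable[[dict], None]     # management action to take when it holds

def evaluate(rules: list, state: dict) -> None:
    """Fire the action of every rule whose condition holds for the current state."""
    for rule in rules:
        if rule.condition(state):
            rule.action(state)

rules = [
    Rule(
        name="maintain-replication-degree",
        condition=lambda s: s["replicas"] < s["target_replicas"],
        action=lambda s: print(f"add replica: {s['replicas']} -> {s['replicas'] + 1}"),
    ),
]
evaluate(rules, {"replicas": 2, "target_replicas": 3})
```

Expressing such objectives as declarative rules, rather than hard-coding them in management code, is what lets operators change system behavior without redeploying the autonomic managers.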
127

Implementation of Distributed Cloud System Architecture using Advanced Container Orchestration, Cloud Storage, and Centralized Database for a Web-based Platform

Karkera, Sohan Sadanand January 2020 (has links)
No description available.
128

Mitigating Distributed Configuration Errors in Cloud Systems

Ma, Sixiang 24 August 2022 (has links)
No description available.
129

dCAMP: Distributed Common API for Measuring Performance

Sideropoulos, Alexander Paul 01 October 2014 (has links) (PDF)
Although the nearing end of Moore’s Law has been predicted numerous times in the past, it will eventually come to pass. In forethought of this, many modern computing systems have become increasingly complex, distributed, and parallel. As software is developed on and for these complex systems, a common API is necessary for gathering vital performance-related metrics while remaining transparent to the user, both in terms of system impact and ease of use. Several distributed performance monitoring and testing systems have been proposed and implemented by both research and commercial institutions. However, most of these systems do not meet several fundamental criteria for a truly useful distributed performance monitoring system: 1) variable data delivery models, 2) security, 3) scalability, 4) transparency, 5) completeness, 6) validity, and 7) portability. This work presents dCAMP: Distributed Common API for Measuring Performance, a distributed performance framework built on top of Mark Gabel and Michael Haungs’ work with CAMP. This work also presents an updated and extended set of criteria for evaluating distributed performance frameworks and uses these to evaluate dCAMP and several related works.
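To make "gathering vital performance-related metrics while remaining transparent to the user" concrete, the sketch below collects a small host-level snapshot with the psutil library. It is an illustrative stand-in, not dCAMP's actual API; a distributed layer would then deliver such snapshots under one of the framework's data delivery models.

```python
import time
import psutil

def sample_host_metrics() -> dict:
    """Collect a small snapshot of CPU, memory, and disk usage on this node."""
    return {
        "timestamp": time.time(),
        "cpu_percent": psutil.cpu_percent(interval=0.1),   # short, low-impact sample
        "memory_percent": psutil.virtual_memory().percent,
        "disk_percent": psutil.disk_usage("/").percent,
    }

if __name__ == "__main__":
    print(sample_host_metrics())
```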
130

Distributed Fault Detection for a Class of Large-Scale Nonlinear Uncertain Systems

Zhang, Qi 29 April 2011 (has links)
No description available.
