21

Efficient Ways to Upgrade Docker Containers in Cloud to Support Backward Compatibility : Various Upgrade Strategies to Measure Complexity

Madala, Sravya January 2016
As telecommunication systems move into the cloud by the thousands to take advantage of its features, upgrades must not interrupt service. This thesis examines efficient ways to upgrade Docker containers while supporting backward compatibility, and is chiefly concerned with keeping systems in the cloud highly available during upgrades. Smaller changes can be applied automatically to some extent: minor schema changes can be handled by Apache Avro, in which the schema is defined explicitly. At some point, however, the changes become too complex for Avro to handle, and real-world upgrades often require major changes on top of an application. We therefore test different upgrade strategies, comparing the code complexity, total upgrade time, and network usage of a single-upgrade strategy versus a multiple-upgrade strategy, each with and without Avro. On code complexity, the single-upgrade strategy without Avro performs best and upgrades all six instances in less time, although it uses more network bandwidth than multiple upgrades. Overall, the single-upgrade strategy is the better way to maintain high availability in the cloud while performing upgrades efficiently.
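To illustrate the kind of minor change Avro can absorb automatically, here is a minimal sketch using the fastavro library; the schema and field names are my own illustration, not taken from the thesis. A new field with a default value lets upgraded readers decode records written by the old container version.

```python
# Sketch: Avro schema evolution for backward compatibility (assumes fastavro).
# A reader using the new schema (extra field with a default) can still decode
# records written with the old schema.
import io
from fastavro import writer, reader, parse_schema

old_schema = parse_schema({
    "type": "record", "name": "Heartbeat",
    "fields": [{"name": "node_id", "type": "string"}],
})

new_schema = parse_schema({
    "type": "record", "name": "Heartbeat",
    "fields": [
        {"name": "node_id", "type": "string"},
        # The new field carries a default, so old data remains readable.
        {"name": "version", "type": "int", "default": 1},
    ],
})

buf = io.BytesIO()
writer(buf, old_schema, [{"node_id": "container-7"}])  # written by old code
buf.seek(0)

for record in reader(buf, reader_schema=new_schema):   # read by upgraded code
    print(record)  # {'node_id': 'container-7', 'version': 1}
```

A major change such as renaming a field without an alias, or changing a field's type incompatibly, falls outside what this resolution mechanism can handle, which is where the thesis's upgrade strategies come in.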
22

Performance modelling and the representation of large scale distributed system functions

Nyong, Obong Dennis Obot January 1999
This thesis presents a resource based approach to model generation for performance characterization and correctness checking of large scale telecommunications networks. A notion called the timed automaton is proposed and then developed to encapsulate behaviours of networking equipment, system control policies and non-deterministic user behaviours. The states of pooled network resources and the behaviours of resource consumers are represented as continually varying geometric patterns; these patterns form part of the data operated upon by the timed automata. Such a representation technique allows for great flexibility regarding the level of abstraction that can be chosen in the modelling of telecommunications systems. None the less, the notion of system functions is proposed to serve as a constraining framework for specifying bounded behaviours and features of telecommunications systems. Operational concepts are developed for the timed automata; these concepts are based on limit preserving relations. Relations over system states represent the evolution of system properties observable at various locations within the network under study. The declarative nature of such permutative state relations provides a direct framework for generating highly expressive models suitable for carrying out optimization experiments. The usefulness of the developed procedure is demonstrated by tackling a large scale case study, in particular the problem of congestion avoidance in networks; it is shown that there can be global coupling among local behaviours within a telecommunications network. The uncovering of such a phenomenon through a function oriented simulation is a contribution to the area of network modelling. The direct and faithful way of deriving performance metrics for loss in networks from resource utilization patterns is also a new contribution to the work area.
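As a rough illustration of the timed-automaton idea (a minimal Python sketch of my own, far simpler than the thesis's formalism, which operates on geometric patterns of pooled resource states), transitions between discrete states are guarded by conditions on a clock:

```python
# Minimal timed-automaton sketch: states, a clock, and guarded transitions.
class TimedAutomaton:
    def __init__(self, state):
        self.state = state
        self.clock = 0.0
        # (source, event) -> (guard on clock value, target state, reset?)
        self.transitions = {}

    def add_transition(self, src, event, guard, dst, reset=False):
        self.transitions[(src, event)] = (guard, dst, reset)

    def elapse(self, dt):
        self.clock += dt  # time passes; no discrete change

    def fire(self, event):
        guard, dst, reset = self.transitions[(self.state, event)]
        if not guard(self.clock):
            raise ValueError(f"guard rejects {event!r} at t={self.clock}")
        self.state = dst
        if reset:
            self.clock = 0.0

# Example: a link that may only time out after 3 time units of silence.
link = TimedAutomaton("idle")
link.add_transition("idle", "send", lambda t: True, "busy", reset=True)
link.add_transition("busy", "timeout", lambda t: t >= 3.0, "idle")
link.fire("send")
link.elapse(3.5)
link.fire("timeout")
print(link.state)  # idle
```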
23

MAGNET - a dynamic resource management architecture

Kostkova, Patricie January 1999
This thesis proposes a new dynamic resource management architecture, Magnet, to meet the requirements of users in flexible and adaptive systems. Computer systems no longer operate in centralized, isolated, static environments. Technological advances, such as smaller and faster hardware and more reliable networks, have increased the mobility of computing and the need for run-time reconfigurability. The dynamic management of this diversity of resources is the central issue addressed in this thesis. Applications in environments with frequently changing characteristics are required to participate in dynamic resource management, to adapt to ever-changing conditions, and to express their requirements in terms of quality of service. Magnet enables dynamic trading of resources, which can be requested indirectly by the type of service they offer rather than directly by name. A dedicated component, the Trader, matches requests for services against the available offers and establishes a component binding, that is, a resource allocation. In addition, the architecture is extensible: it does not constrain the information on services, and it allows user customization of the matching process. Consequently, resource definitions can be parametrized (to include QoS-based characteristics), and the matching process can be user-customized (to perform QoS-based negotiation). In order to fulfill the requirements of users operating under ever-changing conditions, Magnet enables run-time adaptation (dynamic rebinding) to changes in the environment, constant monitoring of resources, and scalability of the architecture. The generality of the Magnet architecture is illustrated with several examples of resource allocation in dynamic environments.
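A minimal sketch of the trading idea follows (illustrative Python of my own devising, not Magnet's actual interface): services are requested by type rather than by name, and a user-supplied predicate customizes the match on QoS properties.

```python
# Sketch of type-based service trading with a user-customizable match,
# loosely in the spirit of Magnet's Trader; all names are illustrative.
offers = [
    {"type": "codec", "name": "codecA", "qos": {"latency_ms": 40}},
    {"type": "codec", "name": "codecB", "qos": {"latency_ms": 12}},
    {"type": "storage", "name": "disk1", "qos": {"latency_ms": 5}},
]

def trade(service_type, match=lambda qos: True):
    """Return offers of the requested type accepted by the QoS predicate."""
    return [o for o in offers if o["type"] == service_type and match(o["qos"])]

# Request a codec indirectly by service type, with a QoS-based constraint.
fast_codecs = trade("codec", match=lambda q: q["latency_ms"] <= 20)
print([o["name"] for o in fast_codecs])  # ['codecB']
```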
24

Making reliable distributed systems in the presence of software errors

Armstrong, Joe January 2003
The work described in this thesis is the result of a research program started in 1981 to find better ways of programming Telecom applications. These applications are large programs which, despite careful testing, will probably contain many errors when the program is put into service. We assume that such programs do contain errors, and investigate methods for building reliable systems despite such errors. The research has resulted in the development of a new programming language (called Erlang), together with a design methodology and a set of libraries for building robust systems (called OTP). At the time of writing, the technology described here is used in a number of major Ericsson and Nortel products, and a number of small companies have been formed to exploit the technology. The central problem addressed by this thesis is that of constructing reliable systems from programs which may themselves contain errors. Constructing such systems imposes a number of requirements on any programming language that is to be used for the construction. I discuss these language requirements and show how they are satisfied by Erlang. Problems can be solved in a programming language, or in the standard libraries which accompany the language. I argue that certain of the requirements necessary to build a fault-tolerant system are solved in the language, and others are solved in the standard libraries. Together these form a basis for building fault-tolerant software systems. No theory is complete without proof that the ideas work in practice. To demonstrate that these ideas work in practice, I present a number of case studies of large, commercially successful products which use this technology. At the time of writing, the largest of these projects is a major Ericsson product, the AXD301, with over a million lines of Erlang code; it is thought to be one of the most reliable products ever made by Ericsson. Finally, I ask whether the goal of finding better ways to program Telecom applications was fulfilled, and I point to areas where I think the system could be improved.
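The "assume errors will happen" philosophy is embodied in OTP's supervisors, which restart crashed workers. Below is a rough Python analogue of that restart strategy, a sketch only; real OTP supervisors run in Erlang with much richer policies.

```python
import time

# Toy analogue of an OTP one_for_one supervisor: run a worker, and if it
# crashes, restart it -- up to max_restarts times within the whole run.
def supervise(worker, max_restarts=3):
    restarts = 0
    while True:
        try:
            return worker()
        except Exception as exc:   # worker crashed; log and restart it
            restarts += 1
            if restarts > max_restarts:
                raise RuntimeError("restart limit reached") from exc
            print(f"worker failed ({exc!r}); restart {restarts}")
            time.sleep(0.1)        # brief backoff before restarting

attempts = {"n": 0}
def flaky_worker():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient fault")
    return "done"

print(supervise(flaky_worker))  # recovers after two restarts
```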
26

On contention management for data accesses in parallel and distributed systems

Yu, Xiao 08 June 2015
Data access is an essential part of any program, and is especially critical to the performance of parallel computing systems. The objective of this work is to investigate the factors that affect data-access parallelism in parallel computing systems, and to design and evaluate methods that improve such parallelism, thereby improving the performance of the corresponding systems. We focus on data contention and network resource contention in representative parallel and distributed systems, including transactional memory systems, geo-replicated transactional systems, and MapReduce systems. These systems represent two widely adopted abstractions for parallel data access: transaction-based and distributed-system-based. In this thesis, we present methods to analyze and mitigate the two contention issues. We first study the data contention problem in transactional memory systems. In particular, we present a queueing-based model to evaluate the impact of data contention with respect to various system configurations and workload parameters. We further propose a profiling-based adaptive contention management approach to choose an optimal policy across different benchmarks and system platforms, and we develop several analytical models to study the design of geo-replicated transactional systems. For the network resource contention issue, we focus on data accesses in distributed systems and study opportunities to improve upon current state-of-the-art MapReduce systems. We extend the system to better support map-task locality for dual-map-input applications, and we study a strategy that groups input blocks within a few racks to balance the locality of map and reduce tasks. Experiments show that both mechanisms significantly reduce off-rack data communication, alleviating contention on the top-of-rack switch and reducing job execution time. In this thesis, we show that data contention and network resource contention are both key to the performance of the transactional and distributed data-access abstractions, and that our mechanisms for estimating and mitigating them are effective. We expect our approaches to provide useful insight for future development and research on similar data-access abstractions and distributed systems.
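As a toy illustration of why data contention matters in transactional memory (a Monte Carlo sketch of my own under simplifying assumptions, not the thesis's queueing model), the conflict rate can be estimated from how often two concurrent transactions touch overlapping items:

```python
import random

# Estimate the probability that two concurrent transactions conflict,
# assuming each writes k items drawn uniformly from d shared items.
def conflict_rate(d, k, trials=100_000, rng=random.Random(1)):
    hits = 0
    for _ in range(trials):
        a = set(rng.sample(range(d), k))
        b = set(rng.sample(range(d), k))
        hits += bool(a & b)        # overlapping write sets must serialize
    return hits / trials

for d in (100, 1000):
    print(d, round(conflict_rate(d, k=8), 3))
# Contention falls sharply as the shared data set grows relative to k.
```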
27

Distributed trigger counting algorithms

Casas, Juan Manual, 1978- 21 February 2011
A distributed system consists of a set of N processor nodes and a finite set of communication channels. It is frequently described as a directed graph in which each vertex represents a processor node and each edge represents a communication channel. A global snapshot of a distributed system consists of the local states of all the processor nodes and all of the in-transit messages of a distributed computation. This is meaningful because it corresponds to a global state in which the local states and communication channels of all the processor nodes are recorded simultaneously. A classic use of snapshots is failure recovery, where the system restarts from the last global snapshot; this application forms the basis for fault tolerance in distributed programs and aids serviceability as a distributed program debugging mechanism. Another important application is checkpointing and monitoring systems, where a series of global snapshots is used to detect when a certain number of triggers have been received by the system. As the distributed system scales, with more processor nodes and more expected triggers, the message complexity grows and increases the total communication and computation overhead of the global snapshot algorithm. In such a large distributed system, an optimal algorithm is vital so that the distributed application employing the snapshots does not suffer performance degradation as the system continues to grow. We are interested in global snapshot algorithms that achieve lower-bound total message complexity and lower-bound MaxLoad for large values of N processor nodes and W expected triggers. In this report we study and simulate the Centralized, Grid-based, Tree-based, and LayeredRand global snapshot algorithms, then evaluate them on the total number of messages (sent and received) and MaxLoad messages (sent and received) for the trigger counting problem in distributed computing. The report concludes with simulation results comparing the performance of the algorithms with respect to the total number of messages and MaxLoad messages each requires to detect that W triggers have been delivered to the distributed system.
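To make the message-cost trade-off concrete, here is a back-of-the-envelope sketch of my own (it does not implement the report's four algorithms): a fully centralized scheme sends one message per trigger to a master, while batching partial counts at each node trades detection granularity for far fewer messages.

```python
import random

# Toy comparison: total messages needed to inform a master about W triggers
# arriving at N nodes, when each node forwards every trigger (batch=1,
# i.e. centralized) versus batching B triggers per count message.
def messages(n, w, batch, rng=random.Random(7)):
    pending = [0] * n
    msgs = 0
    for _ in range(w):
        node = rng.randrange(n)      # trigger lands on a random node
        pending[node] += 1
        if pending[node] >= batch:   # flush a count message to the master
            msgs += 1
            pending[node] = 0
    msgs += sum(1 for p in pending if p)  # final flush when counting ends
    return msgs

n, w = 64, 10_000
print("centralized:", messages(n, w, batch=1))   # ~W messages
print("batched x32:", messages(n, w, batch=32))  # roughly W/32 messages
```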
28

Performance modelling of replication protocols

Misra, Manoj January 1997
This thesis is concerned with the performance modelling of data replication protocols. Data replication is used to provide fault tolerance and to improve the performance of a distributed system. Replication not only needs extra storage but also carries an extra cost when performing an update. It is not always clear which algorithm will give the best performance in a given scenario, how many copies should be maintained, or where these copies should be located to yield the best performance. The consistency requirements also change with the application. One has to choose these parameters to maximize reliability and speed and to minimize cost. A study showing the effect of changing different parameters on the performance of these protocols is helpful in making these decisions. With the use of data replication techniques in wide-area systems, where hundreds or even thousands of sites may be involved, it has become important to evaluate the performance of the schemes maintaining copies of data. This thesis evaluates the performance of replication protocols that provide different levels of data consistency, ranging from strong to weak, and also examines protocols that try to integrate strong and weak consistency. Queueing theory techniques are used to evaluate the performance of these protocols. The performance measures of interest are the response times of read and write jobs. These times are evaluated both when replicas are reliable and when they are subject to random breakdowns and repairs.
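As a simple instance of the queueing approach (a simplified M/M/1 model I am supplying for illustration; the thesis's models are more detailed), one can see how the replica count R shifts the balance between read and write costs: reads are spread over R replicas, but every replica must apply every write.

```python
# M/M/1 sketch of replica response time (a simplified model of my own,
# not the thesis's): reads are spread over R replicas, writes hit all R.
def replica_response_time(lam_read, lam_write, mu, replicas):
    lam = lam_read / replicas + lam_write  # per-replica arrival rate
    if lam >= mu:
        raise ValueError("replica saturated: offered load exceeds service rate")
    return 1.0 / (mu - lam)  # mean response time of an M/M/1 queue

# More replicas spread the read load, but every replica still pays for
# every write (a fuller model would also take the max over R for writes).
for r in (1, 2, 4, 8):
    print(r, round(replica_response_time(40, 5, 60, r), 4))
```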
29

ink - An HTTP Benchmarking Tool

Phelps, Andrew Jacob 15 June 2020
The Hypertext Transfer Protocol (HTTP) is one of the foundations of the modern Internet. Because HTTP servers may be subject to unexpected periods of high load, developers use HTTP benchmarking utilities to simulate the load generated by users. However, many of these tools do not report performance details at a per-client level, which deprives developers of crucial insights into a server's performance capabilities. In this work, we present ink, an HTTP benchmarking tool that enables developers to better understand server performance. ink provides developers with a way of visualizing the level of service that each individual client receives, by recording a trace of events for each simulated client. We also present a GUI that enables users to explore and visualize the data generated by an HTTP benchmark. Lastly, we present a method for running HTTP benchmarks that uses a set of distributed machines to scale up the achievable load on the benchmarked server. We evaluate ink through a series of case studies showing that it is both performant and useful. We validate ink's load-generation abilities on a single machine and on a set of distributed machines; ink is shown to be capable of simulating hundreds of thousands of HTTP clients and presenting per-client results through the ink GUI. We also perform a set of HTTP benchmarks in which ink highlights performance issues and differences between server implementations, comparing servers such as NGINX and Apache.

General audience abstract: The World Wide Web (WWW) uses the Hypertext Transfer Protocol to send web content, such as HTML pages or video, to users. The servers providing this content are called HTTP servers. Sometimes the performance of these HTTP servers is compromised because a large number of users request documents at the same time. To prepare for this, server maintainers test how many simultaneous users a server can handle by using benchmarking utilities, which work by simulating a set of clients. Currently, these tools focus only on the number of requests a server can process per second. Unfortunately, this coarse-grained metric can hide important information, such as the level of service that individual clients received. In this work, we present ink, an HTTP benchmarking utility we developed that focuses on reporting information for each simulated client. Reporting data this way lets the developer see how well each client was served during the benchmark. We achieve this by constructing data visualizations that include a set of client timelines, each representing the service that one client received. We evaluated ink through a series of case studies focusing on the performance of the utility and the usefulness of its visualizations. Additionally, we deployed ink in Virginia Tech's Computer Systems course, where students used the tool and took a survey about their experience with it.
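A minimal sketch of the per-client-trace idea follows, in plain Python with the standard library; ink itself is a separate tool and its internals may differ, and the local URL is an assumed test server. Each simulated client keeps its own timeline of request latencies instead of folding everything into one aggregate rate.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Each simulated client records its own (start, latency) events, so poor
# service to individual clients is visible instead of hidden in an average.
def run_client(client_id, url, requests=5):
    trace = []
    for _ in range(requests):
        start = time.monotonic()
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()
        trace.append((start, time.monotonic() - start))
    return client_id, trace

url = "http://localhost:8080/"  # assumed local server under test
with ThreadPoolExecutor(max_workers=16) as pool:
    for client_id, trace in pool.map(lambda i: run_client(i, url), range(16)):
        worst = max(lat for _, lat in trace)
        print(f"client {client_id}: worst latency {worst * 1000:.1f} ms")
```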
30

Distributed Systems Simulation

Ďuriš, Anton January 2021
This thesis focuses on modeling distributed systems with Petri nets. Distributed systems are increasingly used in applications and computing systems, where their task is to ensure sufficient performance and stability for large numbers of users. When modeling distributed systems, the stochastic behavior of Petri nets matters because it yields more realistic simulations; this thesis therefore focuses mainly on timed Petri nets. The theoretical part summarizes distributed systems, their properties, types, and available architectures, as well as Petri nets, their representation, their types, and their principle of operation. In the practical part, two models were implemented: a horizontally scaled web application split into several services with a distributed database, and a large grid-computing system, specifically the BOINC platform with the Folding@home project. Both models were implemented using the Python PetNetSim library. The goal of the thesis is to run simulations of the created models under different scenarios of their behavior.
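For readers unfamiliar with timed Petri nets, here is a minimal firing loop written as generic Python (my own sketch; it does not reproduce the PetNetSim API): a transition fires when its input places hold enough tokens, consuming them immediately and depositing output tokens after a fixed delay.

```python
import heapq

# Generic timed Petri net sketch: one "serve" transition models a single
# server working through three queued requests, two time units each.
marking = {"requests": 3, "server_free": 1, "done": 0}
transitions = {
    "serve": {"in": {"requests": 1, "server_free": 1},
              "out": {"done": 1, "server_free": 1}, "delay": 2.0},
}
events, now = [], 0.0  # min-heap of (completion_time, transition)

def enabled(t):
    return all(marking[p] >= n for p, n in transitions[t]["in"].items())

while True:
    fired = False
    for name, t in transitions.items():
        if enabled(name):
            for p, n in t["in"].items():    # consume input tokens now
                marking[p] -= n
            heapq.heappush(events, (now + t["delay"], name))
            fired = True
    if not fired and not events:
        break
    now, name = heapq.heappop(events)       # advance to next completion
    for p, n in transitions[name]["out"].items():
        marking[p] += n                     # deposit output tokens

print(now, marking)  # 6.0 {'requests': 0, 'server_free': 1, 'done': 3}
```

The stochastic behavior the abstract mentions would replace the fixed delay with a draw from a probability distribution (for example, exponentially distributed service times).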
