21

Making reliable distributed systems in the presence of software errors

Armstrong, Joe January 2003 (has links)
The work described in this thesis is the result of a research program started in 1981 to find better ways of programming Telecom applications. These applications are large programs which despite careful testing will probably contain many errors when the program is put into service. We assume that such programs do contain errors, and investigate methods for building reliable systems despite such errors.

The research has resulted in the development of a new programming language (called Erlang), together with a design methodology, and set of libraries for building robust systems (called OTP). At the time of writing the technology described here is used in a number of major Ericsson and Nortel products. A number of small companies have also been formed which exploit the technology.

The central problem addressed by this thesis is the problem of constructing reliable systems from programs which may themselves contain errors. Constructing such systems imposes a number of requirements on any programming language that is to be used for the construction. I discuss these language requirements, and show how they are satisfied by Erlang.

Problems can be solved in a programming language, or in the standard libraries which accompany the language. I argue how certain of the requirements necessary to build a fault-tolerant system are solved in the language, and others are solved in the standard libraries. Together these form a basis for building fault-tolerant software systems.

No theory is complete without proof that the ideas work in practice. To demonstrate that these ideas work in practice I present a number of case studies of large commercially successful products which use this technology. At the time of writing the largest of these projects is a major Ericsson product, having over a million lines of Erlang code. This product (the AXD301) is thought to be one of the most reliable products ever made by Ericsson.

Finally, I ask if the goal of finding better ways to program Telecom applications was fulfilled --- I also point to areas where I think the system could be improved.
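Erlang/OTP realizes fault tolerance through supervision: workers are allowed to crash, and a supervisor restarts them. The sketch below mimics that restart discipline in Python rather than Erlang; it is only an illustration of the idea, and all names and the failure rate are invented.

```python
import random

class Worker:
    """A task that may fail; the supervisor, not the worker, handles the error."""
    def run(self):
        if random.random() < 0.3:          # simulated software error
            raise RuntimeError("worker crashed")
        return "result"

def supervise(make_worker, max_restarts=5):
    """Restart a crashed worker up to max_restarts times ('let it crash')."""
    for attempt in range(max_restarts + 1):
        try:
            return make_worker().run()
        except RuntimeError as err:
            print(f"restart {attempt + 1} after: {err}")
    raise SystemExit("restart limit exceeded; escalate to parent supervisor")

print(supervise(Worker))
```

In OTP the same structure is expressed with supervisor behaviours and process links, so recovery policy lives in the supervision tree rather than in the worker code.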
22

On contention management for data accesses in parallel and distributed systems

Yu, Xiao 08 June 2015 (has links)
Data access is an essential part of any program, and is especially critical to the performance of parallel computing systems. The objective of this work is to investigate factors that affect data access parallelism in parallel computing systems, and to design and evaluate methods that improve such parallelism, thereby improving the performance of the corresponding parallel systems. We focus on data access contention and network resource contention in representative parallel and distributed systems, including transactional memory systems, geo-replicated transactional systems, and MapReduce systems. These systems represent two widely adopted abstractions for parallel data access: transaction-based and distributed-system-based. In this thesis, we present methods to analyze and mitigate the two contention issues.

We first study the data contention problem in transactional memory systems. In particular, we present a queueing-based model to evaluate the impact of data contention with respect to various system configurations and workload parameters. We further propose a profiling-based adaptive contention management approach that chooses an optimal policy across different benchmarks and system platforms. We also develop several analytical models to study the design of transactional systems when they are geo-replicated.

For the network resource contention issue, we focus on data accesses in distributed systems and study opportunities to improve upon current state-of-the-art MapReduce systems. We extend the system to better support map task locality for dual-map-input applications. We also study a strategy that groups input blocks within a few racks to balance the locality of map and reduce tasks. Experiments show that both mechanisms significantly reduce off-rack data communication, alleviating resource contention on the top-of-rack switch and reducing job execution time.

In this thesis, we show that data contention and network resource contention are both key to the performance of transactional and distributed data access abstractions, and that our mechanisms for estimating and mitigating them are effective. We expect our approaches to provide useful insight for future development and research on similar data access abstractions and distributed systems.
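The profiling-based adaptive approach can be pictured as a short calibration pass: run a small sample of transactions under each candidate contention-management policy and keep the one with the highest throughput. The sketch below is an illustrative reconstruction under that assumption, not the thesis's implementation; the policies and toy workload are invented.

```python
import time
import random

# Candidate contention-management policies: how a transaction waits after a conflict.
POLICIES = {
    "immediate-retry": lambda attempt: None,
    "exponential-backoff": lambda attempt: time.sleep(random.uniform(0, 1e-4 * 2 ** attempt)),
}

def run_sample(policy, n_txns=200, conflict_rate=0.3):
    """Toy workload: each transaction retries until its simulated conflict clears."""
    for _ in range(n_txns):
        attempt = 0
        while random.random() < conflict_rate:   # conflict: abort and retry
            policy(attempt)
            attempt += 1

def choose_policy(sample_txns=200):
    """Profile each policy on a sample and pick the highest-throughput one."""
    rates = {}
    for name, policy in POLICIES.items():
        start = time.perf_counter()
        run_sample(policy, sample_txns)
        rates[name] = sample_txns / (time.perf_counter() - start)
    return max(rates, key=rates.get)

print(choose_policy())
```

The profiling cost is paid once per benchmark/platform pair, after which the chosen policy is used for the remainder of the run.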
23

Performance modelling of replication protocols

Misra, Manoj January 1997 (has links)
This thesis is concerned with the performance modelling of data replication protocols. Data replication is used to provide fault tolerance and to improve the performance of a distributed system. Replication not only needs extra storage but also has an extra cost associated with it when performing an update. It is not always clear which algorithm will give best performance in a given scenario, how many copies should be maintained or where these copies should be located to yield the best performance. The consistency requirements also change with application. One has to choose these parameters to maximize reliability and speed and minimize cost. A study showing the effect of change in different parameters on the performance of these protocols would be helpful in making these decisions. With the use of data replication techniques in wide-area systems where hundreds or even thousands of sites may be involved, it has become important to evaluate the performance of the schemes maintaining copies of data. This thesis evaluates the performance of replication protocols that provide different levels of data consistency ranging from strong to weak consistency. The protocols that try to integrate strong and weak consistency are also examined. Queueing theory techniques are used to evaluate the performance of these protocols. The performance measures of interest are the response times of read and write jobs. These times are evaluated both when replicas are reliable and when they are subject to random breakdowns and repairs.
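To make the queueing-theoretic approach concrete, the sketch below approximates each replica as an M/M/1 queue under a read-one/write-all scheme: a read is served by a single replica, while a write must complete at all n replicas. The parameters are illustrative and not drawn from the thesis.

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean sojourn time of an M/M/1 queue: W = 1 / (mu - lambda)."""
    assert arrival_rate < service_rate, "queue must be stable"
    return 1.0 / (service_rate - arrival_rate)

n = 5                                   # number of replicas
read_rate, write_rate = 40.0, 10.0      # system-wide arrival rates (jobs/s)
mu = 100.0                              # service rate of one replica (jobs/s)

# Reads are spread across replicas; every write is processed by all of them.
load_per_replica = read_rate / n + write_rate
w = mm1_response_time(load_per_replica, mu)
print("mean read response time :", w)

# A write finishes when the slowest replica finishes. M/M/1 sojourn times are
# exponentially distributed, so (assuming independence across replicas) the
# mean of the maximum over n replicas is H_n * W, H_n the n-th harmonic number.
harmonic_n = sum(1.0 / k for k in range(1, n + 1))
print("mean write response time:", harmonic_n * w)
```

Varying n, the arrival rates, and mu in such a model shows the read/write trade-off directly: adding replicas cheapens reads but inflates the write-all penalty.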
24

ink - An HTTP Benchmarking Tool

Phelps, Andrew Jacob 15 June 2020 (has links)
The Hypertext Transfer Protocol (HTTP) is one of the foundations of the modern Internet. Because HTTP servers may be subject to unexpected periods of high load, developers use HTTP benchmarking utilities to simulate the load generated by users. However, many of these tools do not report performance details at a per-client level, which deprives developers of crucial insights into a server's performance capabilities. In this work, we present ink, an HTTP benchmarking tool that enables developers to better understand server performance. ink provides developers with a way of visualizing the level of service that each individual client receives, by recording a trace of events for each individual simulated client. We also present a GUI that enables users to explore and visualize the data generated by an HTTP benchmark. Lastly, we present a method for running HTTP benchmarks on a set of distributed machines to scale up the achievable load on the benchmarked server. We evaluate ink through a series of case studies showing that ink is both performant and useful. We validate ink's load generation abilities within the context of a single machine and when using a set of distributed machines. ink is shown to be capable of simulating hundreds of thousands of HTTP clients and presenting per-client results through the ink GUI. We also perform a set of HTTP benchmarks in which ink highlights performance issues and differences between server implementations, comparing servers such as NGINX and Apache. / Master of Science / The World Wide Web (WWW) uses the Hypertext Transfer Protocol to send web content such as HTML pages or video to users. The servers providing this content are called HTTP servers. Sometimes the performance of these HTTP servers is compromised because a large number of users request documents at the same time. To prepare for this, server maintainers test how many simultaneous users a server can handle by using benchmarking utilities, which work by simulating a set of clients. Currently, these tools focus only on the number of requests that a server can process per second. Unfortunately, this coarse-grained metric can hide important information, such as the level of service that individual clients received. In this work, we present ink, an HTTP benchmarking utility we developed that focuses on reporting information for each simulated client. Reporting data in this way allows the developer to see how well each client was served during the benchmark. We achieve this by constructing data visualizations that include a set of client timelines, each representing the service that one client received. We evaluated ink through a series of case studies focused on the performance of the utility and the usefulness of the visualizations it produces. Additionally, we deployed ink in Virginia Tech's Computer Systems course, where students were able to use the tool and took a survey about their experience with it.
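The per-client tracing idea can be sketched with a plain asyncio client that timestamps each stage of one simulated request. This is not ink's code, only an illustration of keeping one event timeline per client; the host, path, and client count are placeholders.

```python
import asyncio
import time

async def client(host, path, trace):
    """One simulated HTTP client; append a timestamped event at each stage."""
    trace.append(("connect_start", time.monotonic()))
    reader, writer = await asyncio.open_connection(host, 80)
    trace.append(("connected", time.monotonic()))
    request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
    writer.write(request.encode())
    await writer.drain()
    trace.append(("request_sent", time.monotonic()))
    await reader.read()                      # drain the full response
    trace.append(("response_done", time.monotonic()))
    writer.close()
    await writer.wait_closed()

async def benchmark(host, path, n_clients):
    traces = [[] for _ in range(n_clients)]
    await asyncio.gather(*(client(host, path, t) for t in traces))
    return traces                            # one event timeline per client

traces = asyncio.run(benchmark("example.com", "/", 10))
```

Aggregating only requests-per-second would collapse these ten timelines into one number; keeping them separate is what lets a GUI show which clients were starved.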
25

Simulace distribuovaných systémů / Distributed Systems Simulation

Ďuriš, Anton January 2021 (has links)
This thesis focuses on modeling distributed systems with Petri nets. Distributed systems are increasingly being implemented in applications and computing systems, where their task is to ensure sufficient performance and stability for a large number of users. When modeling a distributed system, the stochastic behavior of Petri nets is important, as it yields more realistic simulations; this thesis therefore focuses mainly on timed Petri nets. The theoretical part summarizes distributed systems (their properties, types, and available architectures) as well as Petri nets (their representation, types, and principle of operation). In the practical part, two models were implemented: a horizontally scaled web application divided into several services with a distributed database, and a large grid computing system, specifically the BOINC platform with the Folding@home project. Both models were implemented using the PetNetSim library for Python. The goal of this thesis is to run simulations on the created models under different scenarios of their behavior.
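A timed, stochastic Petri net can be simulated with a token marking, a set of transitions, and an event queue of pending firings. The sketch below is a generic illustration of that mechanism (it does not use the PetNetSim API): a single server token serializes a pool of request tokens, with exponentially distributed service times.

```python
import heapq
import random

# Marking: tokens per place. One server processes three queued requests.
marking = {"requests": 3, "server_idle": 1, "done": 0}
transitions = [
    # (name, input places, output places, mean firing delay)
    ("serve", {"requests": 1, "server_idle": 1}, {"server_idle": 1, "done": 1}, 0.5),
]

def enabled(t):
    return all(marking[p] >= n for p, n in t[1].items())

clock, events = 0.0, []
while True:
    for t in transitions:
        if enabled(t):
            for p, n in t[1].items():        # consume input tokens now...
                marking[p] -= n
            delay = random.expovariate(1 / t[3])
            heapq.heappush(events, (clock + delay, t))
    if not events:                           # nothing enabled, nothing pending
        break
    clock, t = heapq.heappop(events)         # ...produce outputs when it fires
    for p, n in t[2].items():
        marking[p] += n

print(f"finished at t={clock:.2f}, marking={marking}")
```

Replacing the single transition with a web tier, database tier, and network places gives exactly the kind of scaled-application model the thesis describes.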
26

Improving the Selection of Surrogates During the Cold-Start Phase of a Cyber Foraging Application to Increase Application Performance

Kowalczk, Brian 31 August 2014 (has links)
Mobile devices are generally less powerful and more resource-constrained than their desktop counterparts, yet many of the applications that are of the most value to users of mobile devices are resource intensive and difficult to support on a mobile device. Applications such as games, video playback, image processing, voice recognition, and facial recognition are resource intensive and often exceed the limits of mobile devices. Cyber foraging is an approach that allows a mobile device to discover and utilize surrogate devices present in the local environment to augment the capabilities of the mobile device. Cyber foraging has been shown to be beneficial in augmenting the capabilities of mobile devices to conserve power, increase performance, and increase the fidelity of applications. The cyber foraging scheduler determines what operation to execute remotely and what surrogate to use to execute the operation. Virtually all cyber foraging schedulers in use today utilize historical data in the scheduling algorithm. If historical data about a surrogate is unavailable, execution history must be generated before the scheduler's algorithm can utilize the surrogate. The period between the arrival time of a surrogate and when historical data become available is called the cold-start state. The cold-start state delays the utilization of potentially beneficial surrogates and can degrade system performance. The major contribution of this research was the extension of a historical-based prediction algorithm into a low-overhead estimation-enhanced algorithm that eliminated the cold-start state. This new algorithm performed better than the historical and random scheduling algorithms in every operational scenario. The four operational scenarios simulated typical use-cases for a mobile device: an unconnected environment, an environment where every surrogate was available, an environment where all surrogates were initially unavailable and joined the system slowly over time, and an environment where surrogates randomly and quickly joined and departed the system. One future research possibility is to extend the heuristic to include storage system I/O performance. Additional extensions include accounting for architectural differences between CPUs and the utilization of Bayesian estimates to provide metrics based upon performance specifications rather than direct measurement.
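The cold-start fix can be pictured as a fallback inside the runtime predictor: when a surrogate has no execution history, estimate its speed from hardware specifications so the scheduler can use it immediately. The sketch below is an assumption-laden illustration of that idea, not the dissertation's algorithm; all names and numbers are invented.

```python
class Surrogate:
    def __init__(self, name, cpu_ghz, cores):
        self.name, self.cpu_ghz, self.cores = name, cpu_ghz, cores
        self.history = []                     # observed runtimes (seconds)

    def predicted_runtime(self, op_cycles):
        if self.history:                      # historical prediction path
            return sum(self.history) / len(self.history)
        # Cold start: crude spec-based estimate, usable the moment the
        # surrogate arrives instead of waiting for history to accumulate.
        return op_cycles / (self.cpu_ghz * 1e9 * self.cores)

def pick_surrogate(surrogates, op_cycles):
    """Schedule the operation on the surrogate with the lowest predicted runtime."""
    return min(surrogates, key=lambda s: s.predicted_runtime(op_cycles))

pool = [Surrogate("laptop", 2.4, 4), Surrogate("desktop", 3.6, 8)]
best = pick_surrogate(pool, op_cycles=5e10)
print(best.name)
```

Once real runtimes are appended to `history`, the predictor transparently shifts from the spec-based estimate to the historical average.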
27

The control of flexible robots

Shifman, Jeffrey Joseph January 1991 (has links)
No description available.
28

Specification and proof in real-time systems

Davies, Jim January 1991 (has links)
No description available.
29

Distributed Estimation of a class of Nonlinear Systems

Park, Derek Heungyoul 12 December 2012 (has links)
This thesis proposes a distributed observer design for a class of nonlinear systems that arise in the application of model reduction techniques. Distributed observer design techniques have been proposed in the literature to address estimation problems over sensor networks. In large complex sensor networks, an efficient technique that minimizes the extent of the required communication is highly desirable. This is especially true when sensors have problems caused by physical limitations that result in incorrect information at the local level, affecting the estimation of states globally. To address this problem, scalable algorithms for a suitable distributed observer have been developed. Most such algorithms focus on large linear dynamical systems and are not directly generalizable to nonlinear systems. In this thesis, scalable algorithms for distributed observers are proposed for a class of large-scale observable nonlinear systems. Distributed systems model multi-agent systems in which each agent attempts to accomplish local tasks. In order to achieve global objectives, there must be agreement on certain commonly known variables that depend on the state of all agents. These variables are called consensus states. Once identified, such consensus states can be exploited in the development of distributed consensus algorithms. Consensus algorithms are used to develop information exchange protocols between agents such that global objectives are met through local action. In this thesis, a higher-order observer is applied in the distributed sensor network to design a distributed observer for a class of nonlinear systems. The first method applies fusion of measurement and covariance information to the higher-order filter, with the consensus filter embedded in the local nonlinear observer for data fusion. The second method is based on communicating state estimates between neighbouring sensors rather than fusing measurement and covariance data, and is found to reduce disagreement in the state estimates across sensors. The performance of these new algorithms is demonstrated by simulation, and the second method proves more effective than the first. / Thesis (Master, Chemical Engineering) -- Queen's University, 2012-12-12 11:22:49.113
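The second method's neighbor-to-neighbor exchange is, at heart, a consensus iteration: each sensor repeatedly nudges its estimate toward those of its neighbors. A minimal sketch follows, with scalar states and a fixed ring topology, both assumptions made purely for illustration.

```python
import numpy as np

# Four sensors in a ring; adjacency[i, j] = 1 if sensors i and j communicate.
adjacency = np.array([[0, 1, 0, 1],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [1, 0, 1, 0]])
estimates = np.array([1.0, 3.0, 2.0, 6.0])   # each sensor's local state estimate
eps = 0.2                                     # step size, small enough to converge

for _ in range(50):
    # x_i <- x_i + eps * sum_j (x_j - x_i): the standard Laplacian consensus step
    neighbor_sum = adjacency @ estimates
    degrees = adjacency.sum(axis=1)
    estimates = estimates + eps * (neighbor_sum - degrees * estimates)

print(estimates)   # all entries approach the network average, 3.0
```

In the observer setting each sensor would run this exchange on its state estimate between measurement updates, which is what drives the reduced disagreement the abstract reports.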
30

Performance Optimization Techniques and Tools for Distributed Graph Processing

Kalavri, Vasiliki January 2016 (has links)
In this thesis, we propose optimization techniques for distributed graph processing. First, we describe a data processing pipeline that leverages an iterative graph algorithm for automatic classification of web trackers. Using this application as a motivating example, we examine how asymmetrical convergence of iterative graph algorithms can be used to reduce the amount of computation and communication in large-scale graph analysis. We propose an optimization framework for fixpoint algorithms and a declarative API for writing fixpoint applications. Our framework uses a cost model to automatically exploit asymmetrical convergence and evaluate execution strategies during runtime. We show that our cost model achieves speedup of up to 1.7x and communication savings of up to 54%. Next, we propose to use the concepts of semi-metricity and the metric backbone to reduce the amount of data that needs to be processed in large-scale graph analysis. We provide a distributed algorithm for computing the metric backbone using the vertex-centric programming model. Using the backbone, we can reduce graph sizes up to 88% and achieve speedup of up to 6.7x.
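Semi-metric pruning can be illustrated in a few lines: an edge is redundant for shortest paths if some indirect route is shorter than its own weight, and removing all such edges leaves the metric backbone. The sketch below is a centralized Dijkstra-based version for illustration, not the thesis's distributed vertex-centric algorithm.

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path distances from src; graph maps node -> [(neighbor, weight)]."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def metric_backbone(graph):
    """Keep edge (u, v, w) only if no indirect path beats its direct weight."""
    backbone = {u: [] for u in graph}
    for u in graph:
        dist = dijkstra(graph, u)
        for v, w in graph[u]:
            if dist[v] >= w:          # edge is metric: it lies on a shortest path
                backbone[u].append((v, w))
    return backbone

g = {"a": [("b", 1), ("c", 5)],
     "b": [("a", 1), ("c", 1)],
     "c": [("a", 5), ("b", 1)]}
print(metric_backbone(g))  # drops the semi-metric edge a-c, since 1 + 1 < 5
```

Because shortest paths are unchanged by the pruning, downstream graph analyses can run on the smaller backbone, which is where the reported size and speedup gains come from.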
