About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Measuring program similarity for efficient benchmarking and performance analysis of computer systems

Phansalkar, Aashish S. 28 August 2008
Computer benchmarking involves running a set of benchmark programs to measure the performance of a computer system. Modern benchmarks are developed from real applications; because those applications are becoming more complex, modern benchmarks run for a very long time. These benchmarks are also used for performance evaluation in the early design phase of microprocessors. Due to the size of benchmarks and the increasing complexity of microprocessor design, the effort required for performance evaluation has grown significantly. This dissertation proposes methodologies to reduce the effort of benchmarking and performance evaluation of computer systems. Identifying a set of programs to use for benchmarking can be very challenging. A solution can start by measuring similarity between programs, to capture the diversity in their behavior before they are considered for benchmarking. The aim is to identify redundancy in a set of benchmarks and find a subset of representative benchmarks with the least possible loss of information. This dissertation proposes the use of program characteristics that capture the performance behavior of programs, and identifies representative benchmarks applicable over a wide range of system configurations. Benchmark subsetting has not been restricted to academic research: the SPEC CPU subcommittee recently used similarity measured from program behavior characteristics as one of the criteria for selecting the SPEC CPU2006 benchmarks. Similarity information can also be used to predict the performance of an application when it is difficult to port that application to different platforms, a common problem when a customer wants to buy the best computer system for their application. The performance of a customer's application on a particular system can be predicted from the performance scores of the standard benchmarks on that system together with the similarity between the application and those benchmarks. Similarity between programs is quantified by the distance between them in the space of measured characteristics, and is used to predict the performance of a new application from the performance scores of its neighbors in the workload space.
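
To make the distance-based idea concrete, here is a minimal sketch: each benchmark is represented as a vector of behavior characteristics, the vectors are z-score normalised so that no single characteristic dominates the distance, and a new application's score is predicted by averaging its nearest benchmark neighbors in that space. The benchmark names, characteristic values, and scores below are invented for illustration; they are not taken from the dissertation.

```python
import numpy as np

# Hypothetical microarchitecture-independent characteristics per benchmark
# (e.g. branch ratio, ILP, data locality); all values are made up.
features = {
    "bench_a": [0.12, 4.1, 0.80],
    "bench_b": [0.18, 2.3, 0.55],
    "bench_c": [0.11, 4.0, 0.78],
}
# Measured performance scores of the same benchmarks on one target system.
scores = {"bench_a": 41.0, "bench_b": 23.5, "bench_c": 39.0}

X = np.array(list(features.values()))
mean, std = X.mean(axis=0), X.std(axis=0)

def normalise(v):
    # z-score each characteristic so no single feature dominates the distance
    return (np.asarray(v) - mean) / std

def predict_score(app_features, k=2):
    """Predict an unported application's score from its k nearest benchmarks."""
    dist = {name: float(np.linalg.norm(normalise(vec) - normalise(app_features)))
            for name, vec in features.items()}
    nearest = sorted(dist, key=dist.get)[:k]
    return sum(scores[n] for n in nearest) / k

# An application close to bench_a and bench_c predicts a score near 40.
print(predict_score([0.13, 3.8, 0.75]))
```
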
2

Performance modelling of message-passing parallel programs

Grove, Duncan A. January 2003 (PDF)
This dissertation describes a new performance modelling system, called the Performance Evaluating Virtual Parallel Machine (PEVPM). It uses a novel bottom-up approach in which submodels of individual computation and communication events are dynamically constructed from data dependencies, current contention levels, and the performance distributions of low-level operations, which together characterise performance variability in the face of contention.
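
The bottom-up approach described here lends itself to a Monte Carlo sketch: sample each computation and communication event's duration from a measured distribution and propagate completion times along the data dependencies, so that wider distributions model heavier contention. The event graph, distributions, and field names below are assumptions made for illustration, not the PEVPM implementation.

```python
import random

# Hypothetical event records: each event waits for its dependencies, then
# takes a duration sampled from a measured low-level distribution.
EVENTS = {
    "compute_a": {"deps": [], "sample": lambda: random.gauss(10.0, 0.5)},
    "compute_b": {"deps": [], "sample": lambda: random.gauss(12.0, 0.5)},
    "send_ab":   {"deps": ["compute_a"], "sample": lambda: random.expovariate(1 / 3.0)},
    "recv_b":    {"deps": ["compute_b", "send_ab"], "sample": lambda: random.gauss(1.0, 0.2)},
    "compute_c": {"deps": ["recv_b"], "sample": lambda: random.gauss(8.0, 0.4)},
}

def simulate_once():
    """One trial: an event finishes at the latest finish time among its
    dependencies plus its own sampled duration."""
    finish = {}
    def resolve(name):
        if name not in finish:
            ev = EVENTS[name]
            start = max((resolve(d) for d in ev["deps"]), default=0.0)
            finish[name] = start + ev["sample"]()
        return finish[name]
    for name in EVENTS:
        resolve(name)
    return max(finish.values())

# Repeated trials yield a distribution of predicted runtimes, not a point value.
runtimes = [simulate_once() for _ in range(10_000)]
print(f"mean predicted runtime: {sum(runtimes) / len(runtimes):.2f}")
```
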
3

Evaluation by simulation of queueing network models of multiprogrammed computer systems

Lester, Lewis Neale January 1980
Typescript (photocopy). xvi, 239 leaves : ill., charts ; 31 cm. Title page, contents and abstract only; the complete thesis in print form is available from the University Library. Thesis (Ph.D.)--University of Adelaide, Dept. of Computing Science, 1982.
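
Only catalogue notes survive in this record, but the technique named in the title, simulating a queueing network model of a multiprogrammed system, can be illustrated with a minimal sketch: a closed network in which a fixed number of jobs cycles between a CPU and a disk with exponential service times. The two-station topology and all rates are invented for the example and are not taken from the thesis.

```python
import heapq, random

# Minimal closed queueing network: N_JOBS jobs cycle CPU -> disk -> CPU.
random.seed(1)
N_JOBS, SIM_TIME = 4, 100_000.0
RATES = {"cpu": 1.0, "disk": 0.5}      # exponential service rates (invented)
NEXT = {"cpu": "disk", "disk": "cpu"}  # cyclic routing

queues = {"cpu": N_JOBS, "disk": 0}    # jobs queued or in service per station
busy = set()                           # stations with a job in service
events, now, completions = [], 0.0, 0

def start_service(station):
    # Begin service if the station is idle and has a job waiting.
    if station not in busy and queues[station] > 0:
        busy.add(station)
        heapq.heappush(events, (now + random.expovariate(RATES[station]), station))

for s in queues:
    start_service(s)

while events and now < SIM_TIME:
    now, station = heapq.heappop(events)   # next service completion
    busy.discard(station)
    queues[station] -= 1
    queues[NEXT[station]] += 1             # job moves to the next station
    if station == "disk":
        completions += 1                   # one full CPU->disk cycle finished
    start_service(station)
    start_service(NEXT[station])

# With these rates the disk is the bottleneck, so throughput approaches 0.5.
print(f"throughput: {completions / now:.3f} jobs per unit time")
```
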
4

Performance characteristics of data base machines

Meharg, Randall Lee January 2010
Photocopy of typescript. Digitized by Kansas Correctional Industries.
5

Regenerative techniques for estimating performance measures of highly dependable systems with repairs

Shultes, Bruce Chase 08 1900
No description available.
6

System development : an algorithmic approach

Weingartner, Stephan G. January 1987
The subject chosen for this thesis project is the development of an algorithm, or methodology, for system selection. The specific problem studied involves a procedure to determine which computer system alternative is the best choice for a given user situation. The general problem addressed is the need to choose computing hardware, software, systems, or services in a logical way from a user perspective, considering cost, performance, and human factors. Most existing methods consider only cost and performance, combining these factors in ad hoc, subjective fashions to reach a selection decision. By not considering factors that measure the effectiveness and functionality of computer services for a user, existing methods ignore some of the most important measures of value to the user. In this work, a systematic and comprehensive approach to computer system selection has been developed, along with methods for selecting and organizing the various criteria. Ways to assess the importance and value of different service attributes to an end-user are also discussed. Finally, the feasibility of a systematic approach to computer system selection has been shown by establishing a general methodology and demonstrating it on a specific application.
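
The selection approach argued for here, scoring alternatives on cost, performance, and human factors rather than on cost and performance alone, is in the spirit of a weighted-sum multi-criteria model; a minimal sketch follows. The criteria, weights, and candidate systems are invented for illustration and are not the thesis's actual algorithm.

```python
# Hypothetical weighted-sum scoring of computer system alternatives.
# Attribute values are normalised to [0, 1], where 1 is best for the user.
WEIGHTS = {"cost": 0.3, "performance": 0.4, "human_factors": 0.3}

candidates = {
    "system_a": {"cost": 0.9, "performance": 0.5, "human_factors": 0.6},
    "system_b": {"cost": 0.4, "performance": 0.9, "human_factors": 0.7},
    "system_c": {"cost": 0.6, "performance": 0.7, "human_factors": 0.9},
}

def score(attrs):
    """Overall value to the user: importance-weighted sum of attribute values."""
    return sum(WEIGHTS[k] * v for k, v in attrs.items())

# Rank the alternatives; the weights encode what this user values most.
for name in sorted(candidates, key=lambda n: -score(candidates[n])):
    print(f"{name}: {score(candidates[name]):.2f}")
```
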
7

The analysis of computer systems for performance optimisation

Meiring, Pierre Andre. January 1987
The project investigated the problem of performance optimisation of computer systems at the systems level. It was ascertained that no generally accepted technique for approaching this problem exists. A theoretical approach was therefore developed which describes the system, the workload, and the performance in terms of matrices deduced from measured data. An attempt is then made to verify this theory by applying it to a real system in a controlled environment: a dummy workload is used, and measurements are made on the computer system for various configurations. The results are compared with the expected trends in system performance, and conclusions are drawn which appear to confirm the validity of the proposed theory. Thesis (M.Sc.Eng.)--University of Natal, 1987.
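
The abstract frames the system, the workload, and the performance as matrices deduced from measured data. One hedged reading of that framing is a linear model: fit a system vector s so that the workload matrix W maps onto the observed performance p by least squares. The sketch below uses invented numbers and illustrates the matrix framing only, not the thesis's actual formulation.

```python
import numpy as np

# Invented measurements: each row is one workload's resource demands
# (e.g. CPU seconds, I/O operations) on the system under test.
W = np.array([[10.0, 2.0],
              [ 4.0, 8.0],
              [ 7.0, 5.0]])
# Observed performance (e.g. elapsed time) of each workload on that system.
p = np.array([14.2, 13.9, 14.1])

# Fit a per-resource "system" vector s so that W @ s approximates p;
# s then characterises the system independently of any single workload.
s, *_ = np.linalg.lstsq(W, p, rcond=None)
print("system vector:", s)
print("predicted time for a new workload [6, 6]:", np.array([6.0, 6.0]) @ s)
```
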
8

CLUE: A Cluster Evaluation Tool

Parker, Brandon S. 12 1900
Modern high-performance computing depends on parallel processing systems. Most current benchmarks reveal only high-level computational throughput metrics, which may be sufficient for single-processor systems but can misrepresent the true capability of parallel systems. A new benchmark is therefore proposed. CLUE (Cluster Evaluator) uses a cellular automata algorithm to evaluate the scalability of parallel processing machines, and uses algorithmic variations to evaluate individual system components' impact on the overall serial fraction and efficiency. CLUE is not a replacement for other performance-centric benchmarks; rather, it shows the scalability of a system, provides metrics that reveal where overall performance can be improved, enables better comparisons among different parallel systems than existing benchmarks, and diagnoses where a particular parallel system can be optimized.
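
CLUE's reported metrics, efficiency and serial fraction, can be sketched with standard speedup arithmetic: given run times at several processor counts, compute speedup and efficiency, and estimate the experimentally determined serial fraction with the Karp-Flatt metric. The timing numbers below are invented, and the abstract does not specify CLUE's exact formulas; Karp-Flatt is simply the conventional way to derive a serial fraction from measurements.

```python
# Invented run times (seconds) for the same problem at several processor counts.
timings = {1: 100.0, 2: 54.0, 4: 31.0, 8: 19.5}
t1 = timings[1]

for p, tp in sorted(timings.items()):
    if p == 1:
        continue
    speedup = t1 / tp
    efficiency = speedup / p
    # Karp-Flatt experimentally determined serial fraction:
    # e = (1/speedup - 1/p) / (1 - 1/p); e rising with p signals parallel overhead.
    serial_fraction = (1 / speedup - 1 / p) / (1 - 1 / p)
    print(f"p={p}: speedup={speedup:.2f}, efficiency={efficiency:.2f}, "
          f"serial fraction={serial_fraction:.3f}")
```
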
