Performance testing is a means of evaluating the speed of software projects. Ideally, a project has a set of tests attached to it, and this set can be repeatedly executed to verify that all performance expectations are satisfied. The most widespread method of constructing such tests today is based on measuring absolute time values: a test executes a chosen application unit and then compares the time it took to complete against a precise bound determined in advance. However, this approach has several disadvantages that affect the reliability of such tests. First, it is not clear how those precise bounds should be established. And even if it were, the bounds remain tied to a particular hardware configuration. As a remedy, this thesis presents a different approach based on relative performance comparison. Using logic built on top of research published by the issuing department, chosen application units are compared against each other in a manner that keeps test results reliable even across changes of hardware configuration. The presented theory is also implemented and verified on selected use cases.
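To illustrate the contrast the abstract describes, the following minimal sketch (not taken from the thesis; the unit names, repeat count, and thresholds are hypothetical) shows an absolute-bound test next to a relative comparison of two application units:

    import time

    def measure(fn, repeats=30):
        """Run fn several times and return the best observed wall-clock time."""
        best = float("inf")
        for _ in range(repeats):
            start = time.perf_counter()
            fn()
            best = min(best, time.perf_counter() - start)
        return best

    # Hypothetical application units under test.
    def reference_unit():
        total = 0
        for i in range(100_000):
            total += i

    def tested_unit():
        sum(range(100_000))

    # Absolute-bound test: the 50 ms limit is tied to one hardware configuration
    # and it is unclear how the bound should be chosen in the first place.
    assert measure(reference_unit) < 0.05, "absolute bound exceeded"

    # Relative test: the tested unit is compared against the reference unit,
    # so the expectation (at most 1.2x the reference) is far less sensitive
    # to the hardware the test happens to run on.
    assert measure(tested_unit) < 1.2 * measure(reference_unit), "relative bound exceeded"

The thesis builds a more elaborate comparison logic than this simple ratio check; the sketch only conveys the difference between the two styles of performance test.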
Identifier | oai:union.ndltd.org:nusl.cz/oai:invenio.nusl.cz:328126
Date | January 2013
Creators | Trojánek, Tomáš |
Contributors | Tůma, Petr, Bednárek, David |
Source Sets | Czech ETDs |
Language | English |
Detected Language | English |
Type | info:eu-repo/semantics/masterThesis |
Rights | info:eu-repo/semantics/restrictedAccess |