1

Automated Analysis of Load Testing Results

Jiang, Zhen Ming. 29 January 2013
Many software systems must be load tested to ensure that they can scale up under high load while maintaining their functional and non-functional requirements. Studies show that field problems are often related to systems not scaling to field workloads rather than to feature bugs. To assure the quality of these systems, load testing is a required procedure in addition to conventional functional testing procedures such as unit and integration testing. Current industrial practice for checking the results of a load test remains ad hoc, involving high-level manual checks. Few research efforts are devoted to the automated analysis of load testing results, mainly due to limited access to large-scale systems for use as case studies. Approaches for the automated and systematic analysis of load tests are needed, as many services are being offered online to an increasing number of users. This dissertation proposes automated approaches to assess the quality of a system under load by mining recorded load testing data (execution logs). Execution logs, which are readily available yet rarely used, are generated by output statements that developers insert into the source code. They are hard to parse and analyze automatically because of their free-form structure. We first propose a log abstraction approach that uncovers the internal structure of each log line. We then propose automated approaches that assess the quality of a system under load by deriving functional, performance, and reliability models from the large set of execution logs. Case studies show that our approaches scale well to large enterprise and open-source systems and produce high-precision results that help load testing practitioners effectively analyze the quality of the system under load. / Thesis (Ph.D., Computing) -- Queen's University, 2013.
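As a minimal illustration of the log abstraction idea, the Java sketch below masks dynamic values (numbers, hexadecimal identifiers) so that free-form log lines collapse into a small set of event templates. It is a simplification for illustration only, not the abstraction algorithm proposed in the dissertation; the sample log lines and masking rules are invented.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Minimal log abstraction sketch: mask likely-dynamic tokens so that
// free-form log lines collapse into a small set of event templates.
// Illustration only, not the dissertation's actual approach.
public class LogAbstractionSketch {

    // Replace dynamic values with placeholders to recover a template.
    static String abstractLine(String line) {
        return line
                .replaceAll("0x[0-9a-fA-F]+", "<ID>")   // hex identifiers
                .replaceAll("\\b\\d+\\b", "<NUM>");     // plain numbers
    }

    public static void main(String[] args) {
        List<String> log = List.of(
                "user 42 logged in from session 0x1f3a",
                "user 7 logged in from session 0xbeef",
                "request 19 took 250 ms");

        // Count how often each recovered template occurs.
        Map<String, Integer> templates = new LinkedHashMap<>();
        for (String line : log) {
            templates.merge(abstractLine(line), 1, Integer::sum);
        }
        templates.forEach((t, n) -> System.out.println(n + "x " + t));
        // Prints:
        // 2x user <NUM> logged in from session <ID>
        // 1x request <NUM> took <NUM> ms
    }
}
```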
2

Software Performance Prediction: Using SPE

Gyarmati, Erik; Stråkendal, Per. January 2002
Performance objectives are often neglected during the design phase of a project, and performance problems are often not discovered until the system is implemented. Industry therefore needs a method to predict the performance of a system early in the design phase. One method that addresses this problem is Software Performance Engineering (SPE). This report gives a short introduction to software performance and an overview of the SPE method for performance prediction. It also contains a case study in which SPE is applied to an existing system.
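As a rough illustration of the kind of early prediction SPE enables, the sketch below reduces a hypothetical execution graph to a single predicted response time by summing estimated resource demands, with repeated steps scaled by their loop counts. The scenario, step names, and demand values are invented for illustration and are not taken from the report's case study.

```java
// SPE-style prediction sketch: an execution graph is reduced to one
// predicted response time by summing sequential steps and scaling
// repeated steps by their loop count. All values are hypothetical.
public class SpeSketch {

    record Step(String name, double cpuMs, double ioMs, int repetitions) {
        double demand() {
            return (cpuMs + ioMs) * repetitions;
        }
    }

    public static void main(String[] args) {
        // Estimated demands for a hypothetical "place order" scenario.
        Step[] scenario = {
                new Step("validate input", 0.5, 0.0, 1),
                new Step("read customer record", 0.2, 4.0, 1),
                new Step("check item stock", 0.1, 2.5, 3), // loop: 3 items
                new Step("write order", 0.3, 6.0, 1)
        };

        double total = 0;
        for (Step s : scenario) {
            total += s.demand();
        }
        // Best-case prediction, ignoring contention:
        // 0.5 + 4.2 + 7.8 + 6.3 = 18.8 ms
        System.out.printf("predicted response time: %.1f ms%n", total);
    }
}
```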
3

Extending Peass to Detect Performance Changes of Apache Tomcat

Rosenlund, Stefan. 07 August 2023
New application versions may contain source code changes that decrease the application's performance. To ensure sufficient performance, it is necessary to identify these code changes. Peass is a performance analysis tool that uses performance measurements of unit tests to achieve that goal for Java applications. However, it can only be used with Java applications that are built with Apache Maven or Gradle. This thesis provides a plugin for Peass that enables it to analyze applications built with Apache Ant. Peass uses the frameworks Kieker and KoPeMe to record the execution traces and measure the response times of unit tests. This results in two tasks for the Peass-Ant plugin: (1) add Kieker and KoPeMe as dependencies, and (2) execute the transformed unit tests. For the first task, our plugin programmatically resolves the transitive dependencies of Kieker and KoPeMe and modifies the XML buildfiles of the application under test (a sketch follows below). For the second task, the plugin orchestrates the process surrounding test execution, implementing performance optimizations for the analysis of applications with large codebases, and runs specific Ant commands that prepare and start test execution. To make our plugin work, we additionally improved Peass and Kieker, implementing three enhancements and identifying twelve bugs. We evaluated the Peass-Ant plugin in a case study on 200 commits of the open-source project Apache Tomcat. We detected 14 commits with 57 unit tests that contain performance changes. Our subsequent root cause analysis identified nine source code changes, which we assigned to three clusters of source code changes known to cause performance changes.
Table of contents: 1. Introduction (1.1. Motivation; 1.2. Objectives; 1.3. Organization) / 2. Foundations (2.1. Performance Measurement in Java; 2.2. Peass; 2.3. Apache Ant; 2.4. Apache Tomcat) / 3. Architecture of the Plugin (3.1. Requirements; 3.2. Component Structure; 3.3. Integrated Class Structure of Peass and the Plugin; 3.4. Build Modification Tasks for Tomcat) / 4. Implementation (4.1. Changes in Peass; 4.2. Changes in Kieker and Kieker-Source-Instrumentation; 4.3. Buildfile Modification of the Plugin; 4.4. Test Execution of the Plugin) / 5. Evaluative Case Study (5.1. Setup of the Case Study; 5.2. Results of the Case Study; 5.3. Performance Optimizations for Ant Applications) / 6. Related Work (6.1. Performance Analysis Tools; 6.2. Test Selection and Test Prioritization Tools; 6.3. Empirical Studies on Performance Bugs and Regressions) / 7. Conclusion and Future Work (7.1. Conclusion; 7.2. Future Work)
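As a rough sketch of what the first task, patching an Ant buildfile, could look like, the following DOM-based Java snippet appends a classpath entry to a build.xml using only standard JDK XML APIs. It is an illustration, not the plugin's actual code; the path id test.classpath and the jar location lib/kieker.jar are hypothetical.

```java
import java.io.File;

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;

import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Sketch: add a jar to the test classpath of an Ant buildfile, as a
// Peass-style plugin might do when injecting Kieker and KoPeMe as
// dependencies. The path id and jar location are assumptions.
public class BuildfilePatchSketch {

    public static void main(String[] args) throws Exception {
        File buildFile = new File("build.xml"); // assumed location

        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(buildFile);

        // Find <path id="test.classpath"> and append a <pathelement>
        // pointing at the resolved dependency jar.
        NodeList paths = doc.getElementsByTagName("path");
        for (int i = 0; i < paths.getLength(); i++) {
            Element path = (Element) paths.item(i);
            if ("test.classpath".equals(path.getAttribute("id"))) {
                Element dep = doc.createElement("pathelement");
                dep.setAttribute("location", "lib/kieker.jar"); // assumed
                path.appendChild(dep);
            }
        }

        // Write the modified buildfile back to disk.
        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(doc), new StreamResult(buildFile));
    }
}
```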
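For the second task, a minimal programmatic Ant invocation using Ant's public API might look like the sketch below. The buildfile location and the "test" target name are assumptions for illustration; the plugin's real orchestration around test execution is more involved.

```java
import java.io.File;

import org.apache.tools.ant.DefaultLogger;
import org.apache.tools.ant.Project;
import org.apache.tools.ant.ProjectHelper;

// Sketch: drive an Ant build programmatically, as a Peass-style
// plugin might do to prepare and start test execution. The buildfile
// path and the "test" target are assumptions for illustration.
public class AntRunnerSketch {

    public static void main(String[] args) {
        File buildFile = new File("build.xml"); // assumed location

        Project project = new Project();
        project.setUserProperty("ant.file", buildFile.getAbsolutePath());

        // Forward Ant's output to the console so test runs are visible.
        DefaultLogger logger = new DefaultLogger();
        logger.setOutputPrintStream(System.out);
        logger.setErrorPrintStream(System.err);
        logger.setMessageOutputLevel(Project.MSG_INFO);
        project.addBuildListener(logger);

        project.init();
        ProjectHelper.configureProject(project, buildFile);

        // Run the target that executes the (transformed) unit tests.
        project.executeTarget("test");
    }
}
```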
