1. GPGPU microbenchmarking for irregular application optimization
Winans-Pruitt, Dalton R. (09 August 2022)
Irregular applications, such as unstructured mesh operations, do not map easily onto the typical GPU programming paradigms endorsed by GPU manufacturers, which mostly focus on maximizing concurrency for latency hiding. In this work, we show how alternative techniques focused on latency amortization can be used to control overall latency while requiring less concurrency. We used a custom-built microbenchmarking framework to test several GPU kernels and show how the GPU behaves under relevant workloads. We demonstrate that coalescing is not required for good performance: an uncoalesced access pattern can achieve high bandwidth, even exceeding 80% of the theoretical global memory bandwidth in certain circumstances. We also make further observations on other relevant GPU behaviors. We hope this study opens the door to further investigation of techniques that exploit latency amortization when latency hiding does not achieve sufficient performance.
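To make the coalesced-versus-uncoalesced comparison concrete, the following is a minimal sketch of a global-memory bandwidth microbenchmark using Numba's CUDA bindings. It is not the thesis's framework: the kernel names, the stride of 33, the array size, and the launch configuration are all illustrative assumptions.

```python
# Minimal sketch of a coalesced vs. uncoalesced global-memory bandwidth
# microbenchmark (illustrative; not the framework used in the thesis).
import numpy as np
from numba import cuda

@cuda.jit
def copy_coalesced(src, dst):
    # Adjacent threads touch adjacent elements: accesses coalesce.
    i = cuda.grid(1)
    if i < src.size:
        dst[i] = src[i]

@cuda.jit
def copy_strided(src, dst, stride):
    # Adjacent threads touch elements `stride` apart: accesses within a warp
    # fall into different memory segments and do not coalesce. An odd stride
    # keeps the index map a permutation of the full array.
    i = cuda.grid(1)
    if i < src.size:
        j = (i * stride) % src.size
        dst[j] = src[j]

def bandwidth_gbs(kernel, n=1 << 24, threads=256, *extra):
    src = cuda.to_device(np.ones(n, dtype=np.float32))
    dst = cuda.device_array(n, dtype=np.float32)
    blocks = (n + threads - 1) // threads
    kernel[blocks, threads](src, dst, *extra)   # warm-up / JIT compile
    start, end = cuda.event(timing=True), cuda.event(timing=True)
    start.record()
    kernel[blocks, threads](src, dst, *extra)
    end.record()
    end.synchronize()
    ms = cuda.event_elapsed_time(start, end)
    # One 4-byte read plus one 4-byte write per element.
    return 2 * n * 4 / (ms / 1e3) / 1e9

print("coalesced: %.1f GB/s" % bandwidth_gbs(copy_coalesced))
print("stride 33: %.1f GB/s" % bandwidth_gbs(copy_strided, 1 << 24, 256, 33))
```

On most GPUs the strided variant reports a fraction of the coalesced bandwidth, which is exactly the kind of gap (and the occasional surprising absence of one) that the thesis's measurements probe.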
2. Learning to Predict Software Performance Changes based on Microbenchmarks
David, Lucie (22 July 2024)
Detecting performance regressions early in the software development process is paramount, since performance bugs can lead to severe issues when introduced into a production system. However, it is impractical to run performance tests for every committed code change due to their resource-intensive nature. This study investigates to what extent NLP methods specialized in source code can effectively predict software performance regressions from source code obtained through line-coverage information of microbenchmark executions. Contributing to the overarching goal of supporting test case selection, and thereby increasing the efficiency of performance benchmarking, we evaluate several models at different levels of complexity, ranging from a simple logistic regression classifier to Transformers. Our results show that all implemented models struggle to accurately predict regression-introducing code changes, and that simple ML classifiers employing a Bag-of-Words encoding reach predictive performance similar to that of a BERT-based Transformer model.
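As a concrete illustration of the simple-classifier baseline, here is a minimal sketch of a Bag-of-Words pipeline with scikit-learn. The code snippets and labels are toy stand-ins, not the study's corpus of microbenchmark-covered source lines, and the tokenization choice is an assumption.

```python
# Minimal sketch of a Bag-of-Words regression classifier (toy data only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical stand-ins for the source code covered by a code change.
changes = [
    "for item in items: cache.invalidate(item)",
    "return sorted(values)[0]",
    "while not done: time.sleep(0.1); poll()",
    "total = sum(xs); return total / len(xs)",
]
labels = [1, 0, 1, 0]  # 1 = change introduced a performance regression (toy labels)

# Treat identifiers and keywords as tokens; encode each change as word counts.
model = make_pipeline(
    CountVectorizer(token_pattern=r"[A-Za-z_]\w*"),
    LogisticRegression(max_iter=1000),
)
model.fit(changes, labels)
print(model.predict(["values = sorted(values)"]))
```

The study's finding that a pipeline of roughly this complexity matches a BERT-based model suggests the bottleneck lies in the data rather than in model capacity.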
We further employed a statistical n-gram model to examine whether the 'naturalness' of source code can serve as a reliable indicator of software performance regressions, and concluded that the approach is not applicable to the data set at hand. This further underlines the challenge of effectively predicting performance from source code, and calls into question whether the current quality and quantity of available data are sufficient for an NLP-based machine learning approach to regression detection.
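The 'naturalness' idea can be sketched as follows: train an n-gram language model on known-good code and score new code by its cross-entropy, under the hypothesis (which the study could not confirm on its data) that regression-introducing changes look less 'natural'. The toy bigram model below, with Laplace smoothing and a hypothetical training corpus, illustrates the scoring mechanism; it is not the study's implementation.

```python
# Toy bigram 'naturalness' scorer with Laplace smoothing (illustrative only).
import math
from collections import Counter

def train_bigram(token_lists):
    unigrams, bigrams = Counter(), Counter()
    for toks in token_lists:
        toks = ["<s>"] + toks + ["</s>"]
        unigrams.update(toks)
        bigrams.update(zip(toks, toks[1:]))
    return unigrams, bigrams

def cross_entropy(tokens, unigrams, bigrams):
    # Average negative log2-probability per bigram; higher = less 'natural'.
    vocab = len(unigrams)
    toks = ["<s>"] + tokens + ["</s>"]
    pairs = list(zip(toks, toks[1:]))
    logp = 0.0
    for a, b in pairs:
        p = (bigrams[(a, b)] + 1) / (unigrams[a] + vocab)  # Laplace smoothing
        logp += math.log2(p)
    return -logp / len(pairs)

# Hypothetical training corpus of tokenized 'known-good' code.
corpus = [
    ["return", "sum", "(", "xs", ")"],
    ["return", "len", "(", "xs", ")"],
    ["total", "=", "sum", "(", "xs", ")"],
]
uni, bi = train_bigram(corpus)
print(cross_entropy(["return", "sum", "(", "xs", ")"], uni, bi))  # familiar: low
print(cross_entropy(["while", "True", ":", "pass"], uni, bi))     # unfamiliar: high
```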