LEVERAGING MACHINE LEARNING FOR FAST PERFORMANCE PREDICTION FOR INDUSTRIAL SYSTEMS: Data-Driven Cache Simulator

Yaghoobi, Sharifeh. January 2024.
This thesis presents a novel solution for CPU architecture simulation, with a primary focus on cache-miss prediction using machine learning techniques. The solution consists of two main components: a configurable application that generates detailed execution traces via DynamoRIO, and a machine learning model, specifically a Long Short-Term Memory (LSTM) network, that predicts cache behavior from these traces. The LSTM model was trained and validated on a comprehensive dataset derived from detailed trace analysis, which included parameters such as instruction sequences and memory access patterns. The model was then tested against unseen datasets to evaluate its predictive accuracy and robustness. These tests were critical in demonstrating the model's effectiveness in real-world scenarios, showing that it could reliably predict cache misses with significant accuracy. This validation underscores the viability of machine-learning-based methods for enhancing the fidelity of CPU architecture simulations. However, performance tests comparing the LSTM model with DynamoRIO revealed that the LSTM's satisfactory accuracy comes at the cost of increased processing time: the LSTM model processed 25 million instructions in 45 seconds, compared to DynamoRIO's 41 seconds, with additional overhead for loading the model and executing inference. This highlights a critical trade-off between accuracy and simulation speed, suggesting areas for further optimization and efficiency improvements in future work.
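To illustrate the kind of preprocessing such a pipeline implies, the sketch below shows one plausible way to turn a DynamoRIO-style memory-access trace into fixed-length sequence samples for an LSTM cache-miss predictor. The trace tuple format, the use of address deltas as features, and the window length are assumptions for illustration, not the thesis's exact pipeline.

```python
def make_lstm_samples(trace, window=4):
    """Build (features, labels) pairs from a memory-access trace.

    trace  : list of (address, was_miss) pairs in execution order,
             where was_miss is 1 for a cache miss and 0 for a hit
             (hypothetical format, assumed for this sketch).
    window : number of preceding accesses encoded per sample.

    For each position i >= window, the input sequence is the last
    `window` address deltas (a simple locality feature), and the
    label is the hit/miss flag of access i.
    """
    addrs = [a for a, _ in trace]
    samples, labels = [], []
    for i in range(window, len(trace)):
        # Deltas between consecutive addresses leading up to access i.
        deltas = [addrs[j] - addrs[j - 1] for j in range(i - window + 1, i + 1)]
        samples.append(deltas)
        labels.append(trace[i][1])
    return samples, labels
```

Each `samples[k]` would then be fed to the LSTM as a length-`window` sequence, with `labels[k]` as the binary prediction target; in practice the deltas would be normalized or bucketed before training.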
