A pre-training and fine-tuning dataset for Transformers, consisting of basic blocks with their execution times (average, minimum, and maximum) and the execution context of these blocks, for several Arm Cortex processors: the M7, M4, A53, and A72.
Abstract
We are making public the dataset used to train CAWET, a tool for estimating the Worst-Case Execution Time (WCET) of basic blocks using the Transformer-XL model. CAWET leverages the Transformer architecture for accurate WCET predictions, and its training involves two main phases: self-supervised pre-training on basic-block sequences, followed by fine-tuning on the measured execution times.
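To illustrate how such a dataset can feed the two training phases, the sketch below sets up a small encoder with one head for self-supervised pre-training and one head for execution-time regression. It is not the released CAWET code: a plain PyTorch TransformerEncoder stands in for the Transformer-XL backbone, and the vocabulary size, model dimensions, and head layout are illustrative assumptions.

```python
# Minimal sketch of the two training phases (assumptions: generic Transformer
# encoder instead of Transformer-XL; arbitrary vocabulary and model sizes).
import torch
import torch.nn as nn

VOCAB_SIZE, D_MODEL, MAX_LEN = 1024, 256, 128  # assumed sizes

embed = nn.Embedding(VOCAB_SIZE, D_MODEL)
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4, batch_first=True),
    num_layers=4,
)

# Phase 1: self-supervised pre-training head, e.g. predicting instruction
# tokens of a basic block (cross-entropy loss over the vocabulary).
lm_head = nn.Linear(D_MODEL, VOCAB_SIZE)

# Phase 2: fine-tuning head, regressing the measured execution time of a
# block given its tokens and execution context (e.g. MSE loss).
time_head = nn.Linear(D_MODEL, 1)

tokens = torch.randint(0, VOCAB_SIZE, (8, MAX_LEN))  # (batch, sequence)
hidden = backbone(embed(tokens))                     # (batch, seq, d_model)

pretrain_logits = lm_head(hidden)            # used during pre-training
timing_estimate = time_head(hidden[:, -1])   # used during fine-tuning
```

In this sketch, the same backbone is reused across both phases; only the head and the loss change between pre-training and fine-tuning.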