Other Scientific Publication, Year: 2023

Pre-training and fine-tuning dataset for transformers consisting of basic blocks and their execution times (average, minimum, and maximum) along with the execution context of these blocks, for various Cortex processors M7, M4, A53, and A72.

Abstract

We are making public the dataset used for training CAWET, a tool for estimating the Worst-Case Execution Time (WCET) of basic blocks using the Transformer-XL model. CAWET leverages the Transformer architecture for accurate WCET prediction, and its training involves two main phases: self-supervised pre-training followed by fine-tuning.
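For illustration only, the sketch below shows how records of this kind (a basic block, its execution context, and its average, minimum, and maximum execution times for a given Cortex processor) might be read in preparation for the fine-tuning phase. The file name and all field names (`basic_block`, `context`, `avg_time`, ...) are hypothetical assumptions, not the archive's actual schema.

```python
# Minimal sketch, assuming a CSV layout with one row per measured basic block.
# All column names below are hypothetical; consult the deposited archive for the real format.
import csv

def load_records(path):
    """Yield one record per basic block: instructions, context, and observed timings."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            yield {
                "basic_block": row["basic_block"],   # instruction sequence of the block
                "context": row["context"],           # instructions executed before the block
                "avg_time": float(row["avg_time"]),  # average observed execution time
                "min_time": float(row["min_time"]),  # minimum observed execution time
                "max_time": float(row["max_time"]),  # maximum observed execution time
                "processor": row["processor"],       # e.g. Cortex-M7, M4, A53, or A72
            }

if __name__ == "__main__":
    # Hypothetical file name; the dataset on Zenodo may be organized differently.
    for record in load_records("cortex_m7_finetune.csv"):
        print(record["basic_block"], record["max_time"])
```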
No file deposited

Dates and versions

hal-04769606, version 1 (06-11-2024)

Identifiers

HAL Id: hal-04769606
DOI: 10.5281/zenodo.10043908

Cite

Abderaouf Nassim Amalou, Isabelle Puaut, Elisa Fromont. Pre-training and fine-tuning dataset for transformers consisting of basic blocks and their execution times (average, minimum, and maximum) along with the execution context of these blocks, for various Cortex processors M7, M4, A53, and A72. 2023, ⟨10.5281/zenodo.10043908⟩. ⟨hal-04769606⟩