Reproducibility and Accuracy for High-Performance Computing
Abstract
On modern multi-core, many-core, and heterogeneous architectures, floating-point computations, especially reductions, can be non-deterministic and hence non-reproducible, mainly due to the non-associativity of floating-point operations. We introduce an approach to compute the correctly rounded sums of large floating-point vectors accurately and efficiently, achieving deterministic results by construction. Our multi-level algorithm consists of two main stages: a filtering stage that relies on fast vectorized floating-point expansions, and an accumulation stage based on superaccumulators in a high-radix carry-save representation. We extend this approach to the dot product and matrix-matrix multiplication. In this talk, I will present the reproducible and accurate (rounding to nearest) algorithms for summation, dot product, and matrix-matrix multiplication, as well as their implementations in parallel environments such as Intel server CPUs, Intel Xeon Phi, and both NVIDIA and AMD GPUs. I will show that the performance of our algorithms is comparable to that of standard implementations.
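To make the filtering stage concrete, below is a minimal C++ sketch of its key building block: Knuth's TwoSum error-free transformation, used to accumulate values into a floating-point expansion (an array of non-overlapping doubles). This is an illustration only, not the authors' implementation; the names `two_sum` and `expansion_accumulate` are hypothetical, and the vectorization, the superaccumulator, and the carry-save representation of the full algorithm are omitted. In the full scheme, any residue that does not fit in the expansion would be flushed exactly to a superaccumulator rather than handled as shown here.

```cpp
#include <cstdio>
#include <vector>

// Error-free transformation (Knuth's TwoSum): returns s = fl(a + b) and
// sets e so that a + b = s + e holds exactly in round-to-nearest.
static inline double two_sum(double a, double b, double &e) {
    double s  = a + b;
    double bp = s - a;
    e = (a - (s - bp)) + (b - bp);
    return s;
}

// Accumulate x into an expansion acc[0..n-1] (most significant first),
// propagating the rounding error down the chain. Returns the residue that
// did not fit; in the full algorithm this would be added exactly to a
// superaccumulator, preserving correctness.
double expansion_accumulate(double *acc, int n, double x) {
    for (int i = 0; i < n && x != 0.0; ++i) {
        double e;
        acc[i] = two_sum(acc[i], x, e);
        x = e;  // leftover error becomes the value to insert next
    }
    return x;  // non-zero only when the expansion's capacity is exceeded
}

int main() {
    const int N = 4;
    double acc[N] = {0.0, 0.0, 0.0, 0.0};
    // A naive left-to-right sum of this data loses both 1.0 terms to
    // cancellation; the expansion keeps them exactly.
    std::vector<double> data = {1e16, 1.0, -1e16, 1.0};
    for (double x : data) expansion_accumulate(acc, N, x);
    double sum = 0.0;
    for (int i = N - 1; i >= 0; --i) sum += acc[i];  // least significant first
    std::printf("expansion sum = %.17g\n", sum);      // prints 2
}
```

Because TwoSum captures the rounding error of every addition exactly, the result of this accumulation does not depend on the order in which the inputs arrive, which is what makes the filtering stage reproducible by construction.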