Divergence Analysis with Affine Constraints
Abstract
The rise of graphics processing units in high-performance computing is bringing renewed interest in code optimization techniques that target SIMD processors. Many of these optimizations rely on divergence analyses, which classify variables as uniform, if they have the same value on every thread, or divergent, if they might not. This paper introduces a new kind of divergence analysis that can represent variables as affine functions of thread identifiers. We have implemented our divergence analysis with affine constraints on top of Ocelot, an open-source compiler, and used it to analyze a suite of 177 CUDA kernels from well-known benchmarks. These experiments show that our algorithm reports 4% fewer divergent variables than the previous state-of-the-art algorithm of Coutinho et al. Furthermore, we can mark about one fourth of all divergent variables as affine functions of thread identifiers. In addition to the novel divergence analysis, we also introduce the notion of a divergence-aware register allocator. This allocator uses information from our analysis to either rematerialize affine variables or move uniform variables to shared memory. As a testament to its effectiveness, our divergence-aware allocator produces GPU code that is 29.70% faster than the code produced by Ocelot's register allocator.
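To make the three classes concrete, here is a minimal CUDA sketch; it is not taken from the paper, and the kernel and its names (`scale_pos`, `data`, `out`, `scale`) are hypothetical. It shows a uniform variable (same value on every thread), an affine variable (a linear function of the thread identifier), and a divergent variable (per-thread in a way that is not affine in the thread id), with comments noting how a divergence-aware allocator might treat each.

```cuda
#include <cstdio>

__global__ void scale_pos(const float *data, float *out, int n, float scale) {
    // Uniform: computed only from kernel parameters, so it has the same
    // value on every thread. A divergence-aware allocator could place it
    // in shared memory rather than spilling one copy per thread.
    float s = scale * 2.0f;

    // Affine: a linear function of the thread identifier. Rather than
    // spilling it, the allocator can rematerialize it cheaply from
    // blockIdx/blockDim/threadIdx when it is needed again.
    int idx = blockDim.x * blockIdx.x + threadIdx.x;

    // Divergent: depends on memory contents, so its value may differ per
    // thread in a way that is not an affine function of the thread id.
    if (idx < n) {
        float v = data[idx];
        out[idx] = v > 0.0f ? v * s : 0.0f;
    }
}

int main() {
    // Minimal launch to make the sketch self-contained.
    const int n = 256;
    float *d_in, *d_out;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemset(d_in, 0, n * sizeof(float));
    scale_pos<<<1, n>>>(d_in, d_out, n, 3.0f);
    cudaDeviceSynchronize();
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```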