Context-memory Aware Mapping for Energy Efficient Acceleration with CGRAs
Abstract
Coarse Grained Reconfigurable Arrays (CGRAs) are emerging as a low-power computing alternative providing a high degree of acceleration. However, the area and energy efficiency of these devices are bottlenecked by the configuration/context memory when they are made autonomous and loosely coupled with CPUs. The size of these context memories is of prime importance due to their large area and impact on power consumption. For instance, a 64-word context memory typically represents 40% of a processing element's area. In this context, since traditional mapping approaches do not take the size of the context memory into account, CGRAs often become oversized, which strongly degrades their performance and appeal. In this paper, we propose a context-memory-aware mapping for CGRAs to achieve better area and energy efficiency. This paper motivates the need to constrain the size of the context memory inside the processing element (PE) for ultra-low-power acceleration. It also describes the mapping approach, which tries to find at least one mapping solution for a given set of constraints defined by the context memories of the PEs. Experiments show that our proposed solution achieves an average of 2.3× energy gain (with a maximum of 3.1× and a minimum of 1.4×) compared to the mapping approach without the memory constraints, while using 2× less context memory. When compared to the CPU, the proposed mapping achieves an average of 14× energy gain (with a maximum of 23× and a minimum of 5×).
Domains
Hardware Architecture [cs.AR]