vPIM: Processing-in-Memory Virtualization
Abstract
Data movement is the leading cause of performance degradation and energy consumption in modern data centers. Processing in-memory (PIM) is an architecture that addresses data movement by bringing computation inside the memory chips. This paper is the first to study the virtualization of PIM devices by designing and implementing vPIM, an open-source UPMEM-based virtualization system for the cloud. Our vPIM design considers four requirements: Compatibility, such that neither hardware nor hypervisor changes are needed; Multiplexing and isolation, for a higher utilization ratio; Usability and transparency, such that applications written for PIM run efficiently out-of-the-box, enabling rapid adoption; Minimization of virtualization performance overhead.
We prototype vPIM in Firecracker, extending the virtio standard. Our experimental evaluation uses 16 applications provided by PrIM, a recent PIM benchmark suite. The virtualization overhead is between 1.01× and 2.07× for unmodified PrIM applications. To keep overhead low, vPIM introduces several optimizations: zero-copy from the guest OS to Firecracker, efficient virtio queue management, efficient Guest Physical Address to Host Virtual Address translation, parallel processing on multiple ranks, automatic data batching and pre-fetching, and the reimplementation of some specific functionality in C instead of Rust. We hope this work will lay the foundation for future research on PIM for cloud computing.
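The abstract names a Guest Physical Address (GPA) to Host Virtual Address (HVA) translation optimization without detailing it. The C sketch below is a hypothetical illustration of how such a translation is commonly done in a virtio backend, not vPIM's actual code: it assumes guest RAM is mapped into the host process as a small set of regions, and all identifiers (`mem_region`, `gpa_map`, `gpa_to_hva`) are invented for this example.

```c
/*
 * Hypothetical sketch (not vPIM's implementation): GPA-to-HVA
 * translation in a virtio device backend.  Guest RAM is assumed to be
 * exposed to the host process as a few mmap'ed regions; translating a
 * GPA is a lookup in that region table.  Caching the index of the last
 * matching region makes consecutive lookups into the same region a
 * single comparison.
 */
#include <stddef.h>
#include <stdint.h>

struct mem_region {
    uint64_t gpa_start;   /* guest-physical base of the region      */
    uint64_t size;        /* length of the region in bytes          */
    uint8_t *hva_start;   /* host virtual address the region maps to */
};

struct gpa_map {
    struct mem_region *regions;
    size_t nregions;
    size_t last_hit;      /* index of the last matching region (cache) */
};

/* Return the HVA backing `gpa`, or NULL if it is not guest RAM. */
static void *gpa_to_hva(struct gpa_map *map, uint64_t gpa)
{
    if (map->nregions == 0)
        return NULL;

    /* Fast path: most lookups hit the same region as the previous one. */
    struct mem_region *r = &map->regions[map->last_hit];
    if (gpa >= r->gpa_start && gpa - r->gpa_start < r->size)
        return r->hva_start + (gpa - r->gpa_start);

    /* Slow path: linear scan of the (small) region table. */
    for (size_t i = 0; i < map->nregions; i++) {
        r = &map->regions[i];
        if (gpa >= r->gpa_start && gpa - r->gpa_start < r->size) {
            map->last_hit = i;
            return r->hva_start + (gpa - r->gpa_start);
        }
    }
    return NULL; /* GPA not backed by guest memory */
}
```

Such a cached lookup is most effective when virtio descriptors are processed in batches, since consecutive buffers of a request typically fall into the same guest memory region.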