Fine granularity and adaptive cache update mechanism for client caching
Abstract
Distributed file systems are commonly used as the back end to provide high-performance I/O services for complex data-processing workloads, such as databases. Buffering frequently used data in the local file cache, known as the client caching mechanism, can reduce I/O latency and improve file-access performance. However, the overhead and complexity of ensuring data consistency may offset the performance benefits of caching. This paper proposes an adaptive, per-block cache update mechanism that ensures the consistency of block data in a distributed file system with low synchronization overhead. Specifically, based on the recent history of read/write requests, the proposed scheme adaptively selects the best-suited consistency update policy for each data block at run time. Experiments on database applications show that the proposed mechanism greatly reduces the overhead of maintaining cache consistency, especially for workloads with fluctuating read/write ratios, and thus also improves overall system performance.
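The sketch below illustrates one way such per-block adaptive policy selection could work, assuming a sliding-window read/write history and two candidate policies (write-update and write-invalidate); the window size, threshold, and class names are illustrative assumptions, not the paper's implementation.

```python
from collections import deque

WINDOW = 64             # recent requests remembered per block (assumed)
UPDATE_THRESHOLD = 2.0  # reads-per-write ratio above which pushing updates pays off (assumed)

class BlockStats:
    """Tracks recent accesses to one cached block and picks a consistency policy."""

    def __init__(self):
        self.history = deque(maxlen=WINDOW)  # sliding window of 'R'/'W' events
        self.policy = "invalidate"           # start with the cheaper policy

    def record(self, op):
        """Record a read ('R') or write ('W') and re-evaluate the policy."""
        self.history.append(op)
        reads = self.history.count("R")
        writes = self.history.count("W") or 1  # avoid division by zero
        # Many readers per write: propagating updates keeps client caches warm.
        # Few readers per write: invalidation avoids wasted update traffic.
        self.policy = "update" if reads / writes >= UPDATE_THRESHOLD else "invalidate"

# Usage: keep one BlockStats per cached block and consult it on each write-back.
stats = BlockStats()
for op in "RRWRRRWRRR":
    stats.record(op)
print(stats.policy)  # -> 'update' for this read-heavy trace
```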