Generate and Update Large HDT RDF Knowledge Graphs on Commodity Hardware
Abstract
HDT is a popular compressed file format for storing, sharing and querying large RDF Knowledge Graphs (KGs). While all these operations are possible in low hardware settings (i.e., on a standard laptop), generating and updating HDT files comes with a significant hardware cost, especially in terms of memory and disk usage. In this paper, we present a new tool leveraging HDT, namely k-HDTDiffCat, that makes it possible to a) reduce the memory and disk footprint of HDT file creation and b) remove triples from an existing HDT file, thus enabling updates. We show that, on a system with 8 times less memory, we can generate HDT files in almost the same time as existing methods. Moreover, our system can remove triples from an HDT file, thereby supporting updates. This operation does not require decompressing the original data (as was the case in the original HDT implementation) and keeps memory consumption low. While HDT was already suited for storing, exchanging and querying large Knowledge Graphs in low hardware settings, we additionally offer the novel capability of generating and updating HDT files in such settings. As a side effect, HDT becomes an ideal indexing structure for large KGs in low hardware settings, making them more accessible to the community. In particular, we show that we can compress the whole Wikidata graph, the largest knowledge graph currently available, on a standard laptop with 16 GB of RAM, and generate Wikidata indexes that are at most 24 hours behind the live Wikidata endpoint.
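For context, the sketch below shows how an RDF dump is typically compressed into HDT with the standard hdt-java `HDTManager` API, on top of which tools such as k-HDTDiffCat build. It is a minimal illustration, not the paper's tool: the file names and base URI are assumptions, and the memory-reduction and triple-removal features described above are specific to k-HDTDiffCat, not to this baseline call.

```java
// Minimal sketch of baseline HDT generation with hdt-java (not k-HDTDiffCat).
// Input file name, output file name, and base URI are illustrative assumptions.
import org.rdfhdt.hdt.enums.RDFNotation;
import org.rdfhdt.hdt.hdt.HDT;
import org.rdfhdt.hdt.hdt.HDTManager;
import org.rdfhdt.hdt.options.HDTSpecification;

public class GenerateHDT {
    public static void main(String[] args) throws Exception {
        // Parse an N-Triples dump and build its compressed HDT representation.
        HDT hdt = HDTManager.generateHDT(
                "dump.nt",                  // input RDF dump (assumed name)
                "http://example.org/base",  // base URI for relative IRIs (assumed)
                RDFNotation.NTRIPLES,
                new HDTSpecification(),     // default build options
                null);                      // no progress listener
        try {
            // Persist the compressed file so it can later be shared and queried.
            hdt.saveToHDT("dump.hdt", null);
        } finally {
            hdt.close();
        }
    }
}
```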
Domains
Database [cs.DB]

Origin: Files produced by the author(s)