Journal article in Computing and Software for Big Science, 2018

Polynomial data compression for large-scale physics experiments

Abstract

The next generation of research experiments will add a huge data surge to the continuously increasing data production of current experiments. This surge calls for efficient compression techniques that guarantee an optimum trade-off between compression ratio and compression/decompression speed without affecting data integrity. This work presents a lossless compression algorithm for physics data generated by astronomy, astrophysics and particle physics experiments. The algorithm has been tuned and tested on a real use case: the Cherenkov Telescope Array, the next-generation, ground-based, high-energy gamma-ray observatory, which demands high compression performance. As a stand-alone method, the proposed compression is very fast and reasonably efficient. Applied instead as a pre-compression step, it accelerates common methods such as the Lempel–Ziv–Markov chain algorithm (LZMA) while keeping comparable compression performance.
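The core idea behind such a polynomial method is to pack a block of integers known to lie in a bounded range into one large integer using a positional (polynomial) representation in a minimal base, and optionally hand the packed bytes to a generic compressor such as LZMA. The sketch below illustrates that general idea only, under assumptions of ours: the function names, parameters and byte layout are illustrative and do not reproduce the authors' implementation.

import lzma
import random

def polynomial_pack(values, vmin, vmax):
    # Pack integers bounded by [vmin, vmax] into one big integer using a
    # positional (polynomial) representation in base (vmax - vmin + 1),
    # then serialise it to bytes. Illustrative sketch, not the paper's code.
    base = vmax - vmin + 1
    packed = 0
    for v in values:                      # Horner-style accumulation
        packed = packed * base + (v - vmin)
    nbytes = max(1, (packed.bit_length() + 7) // 8)
    return packed.to_bytes(nbytes, "little")

def polynomial_unpack(blob, n, vmin, vmax):
    # Invert polynomial_pack: recover the n original values losslessly.
    base = vmax - vmin + 1
    packed = int.from_bytes(blob, "little")
    out = []
    for _ in range(n):
        packed, digit = divmod(packed, base)
        out.append(digit + vmin)
    return out[::-1]                      # divmod yields the last value first

# Usage: pack simulated camera samples in the range [0, 2999], check the
# lossless round trip, then optionally feed the packed bytes to LZMA.
samples = [random.randrange(0, 3000) for _ in range(1000)]
packed = polynomial_pack(samples, 0, 2999)
assert polynomial_unpack(packed, len(samples), 0, 2999) == samples
stronger = lzma.compress(packed)          # pre-compression followed by LZMA

With a non-power-of-two range such as [0, 2999], the packed size approaches log2(3000), about 11.55 bits per value, slightly tighter than the 12 bits a plain bit-packing scheme would need; the optional LZMA pass then removes any remaining redundancy at the cost of speed.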
Main file
1805.01844 (623.51 KB)
Origin: Publisher files authorised on an open archive

Dates and versions

hal-01960337 , version 1 (08-12-2023)

Identifiers

Cite

Pierre Aubert, Thomas Vuillaume, Gilles Maurin, Jean Jacquemier, Giovanni Lamanna, et al.. Polynomial data compression for large-scale physics experiments. Computing and Software for Big Science, 2018, 2 (1), pp.6. ⟨10.1007/s41781-018-0010-3⟩. ⟨hal-01960337⟩