Estimation of blood pressure waveform from facial video using a deep U-shaped network and the wavelet representation of imaging photoplethysmographic signals
Abstract
BACKGROUND. The remote measurement of physiological signals from video has gained particular attention in recent years. Estimating cardiovascular parameters such as oxygen saturation and arterial blood pressure (BP) is covered by a limited number of studies and remains a very challenging issue. Recent attempts have demonstrated that BP can be estimated from facial video, but only under very controlled scenarios or with moderate performance. The data used in these works have not been publicly released or were gathered in a clinical setting.

METHODS. In contrast, we propose a framework for estimating BP from publicly available data in order to allow replication and to facilitate fair comparison. We developed and trained a deep U-shaped neural network to recover the blood pressure waveform from its imaging photoplethysmographic (iPPG) signal counterpart. The model predicts the continuous wavelet transform (CWT) representation of a BP signal from the CWT of an iPPG signal. The inverse CWT is ultimately computed to recover the BP time series.

RESULTS. The proposed framework was evaluated on 57 participants using the international standards developed by the AAMI and the BHS. Results exhibit close agreement with ground-truth BP values. The method satisfies all standards in the estimation of mean and diastolic BP (grade A) and nearly all standards in the estimation of systolic BP (grade B).

CONCLUSIONS. This is, to the best of our knowledge, the first demonstration of a deep learning-oriented framework that manages to predict the continuous blood pressure waveform from facial video analysis. The code developed during the study is publicly available (https://github.com/frederic-bousefsaf/ippg2bp).
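To illustrate the time-frequency representation used at the input and output of the network, the following Python sketch computes the CWT of an iPPG-like signal and applies an approximate inverse CWT to recover a time series. It is a minimal illustration under stated assumptions: the use of PyWavelets, the Morlet mother wavelet, the scale range, the frame rate, and the simple summation-based reconstruction are choices made here for demonstration and are not claimed to match the exact settings or network described in the paper.

```python
import numpy as np
import pywt

FS = 30.0                  # assumed video frame rate (Hz)
SCALES = np.arange(1, 65)  # assumed scale range for the scalogram

def cwt_image(signal, scales=SCALES, wavelet="morl", fs=FS):
    """Compute the CWT of a 1-D signal; the 2-D scalogram plays the role of the network input/output."""
    coeffs, freqs = pywt.cwt(signal, scales, wavelet, sampling_period=1.0 / fs)
    return coeffs, freqs

def inverse_cwt(coeffs, scales=SCALES):
    """Approximate inverse CWT via a scale-weighted summation (delta reconstruction) formula."""
    return (coeffs / np.sqrt(scales)[:, None]).sum(axis=0)

if __name__ == "__main__":
    t = np.arange(0, 10, 1.0 / FS)
    ippg = np.sin(2 * np.pi * 1.2 * t)      # synthetic 72 bpm pulse as a stand-in for an iPPG signal
    scalogram, _ = cwt_image(ippg)
    # a trained U-shaped network would map the iPPG scalogram to a BP scalogram at this point
    bp_waveform = inverse_cwt(scalogram)    # back to a time series
    print(scalogram.shape, bp_waveform.shape)
```

The reconstruction shown here only approximates the original amplitude; in practice the recovered waveform would be rescaled (e.g., calibrated against reference BP values) before computing systolic, diastolic, or mean pressure.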