GaVA-CLIP: Refining Multimodal Representations with Clinical Knowledge and Numerical Parameters for Gait Video Analysis in Neurodegenerative Diseases
Abstract
We present GaVA-CLIP, a knowledge augmentation strategy for Gait Video Analysis, designed to assess diagnostic groups and gait impairment. Built on the large-scale pretrained vision-language model CLIP, GaVA-CLIP learns and enhances visual, textual, and numerical representations of patient gait videos through collective learning across three distinct modalities: gait videos, class-specific descriptions, and numerical gait parameters. Our contributions are twofold. First, we adopt a knowledge-aware prompt tuning strategy that uses class-specific medical descriptions to guide text prompt learning. Second, we integrate paired gait parameters as numerical texts to enhance the numeracy of the textual representations. Results demonstrate that GaVA-CLIP not only significantly outperforms state-of-the-art (SOTA) methods on video-based classification tasks but also decodes the learned class-specific text features into natural language descriptions using the vocabulary of quantitative gait parameters. The code and the model will be made available at our project page: https://lisqzqng.github.io/GaitAnalysisVLM.
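To make the second contribution concrete, the sketch below shows one plausible way to serialize paired gait parameters as "numerical text" and encode them together with a class-specific description using CLIP's text encoder. It is a minimal sketch based on OpenAI's public `clip` package; the prompt template, the parameter names (`cadence`, `stride_length`), and the example description are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch: encoding a class-specific description plus numerical
# gait parameters as text with CLIP. Assumes OpenAI's `clip` package
# (pip install git+https://github.com/openai/CLIP.git) and PyTorch.
# Prompt template and parameter names are illustrative only.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

# Hypothetical class-specific clinical description and paired
# numerical gait parameters for one video.
class_description = ("a patient walking with reduced arm swing "
                     "and short, shuffling steps")
gait_params = {"cadence": 98.4, "stride_length": 0.87}  # steps/min, meters

# Serialize the numbers into a numerical text appended to the prompt.
param_text = ", ".join(f"{k} of {v:g}" for k, v in gait_params.items())
prompt = f"{class_description}, with {param_text}"

tokens = clip.tokenize([prompt]).to(device)
with torch.no_grad():
    text_features = model.encode_text(tokens)  # shape (1, 512) for ViT-B/32
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)

# The normalized text features can then be scored against video (frame)
# features by cosine similarity, as in standard CLIP classification.
print(text_features.shape)
```

In this sketch the numerical parameters simply extend the text prompt before tokenization; the paper's knowledge-aware prompt tuning would instead learn parts of the prompt, which is beyond what this self-contained example shows.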