Multimodal Information Retrieval to Assist Diabetic Retinopathy Diagnosis
Abstract
Purpose: We propose several Case-Based Reasoning (CBR) methods to retrieve diabetic patient files: should an ophthalmologist have doubts about a diagnosis, he can send the available data about his patient to the system, which selects the most similar cases, along with their medical interpretations, to assist the diagnosis.

Methods: The system is applied to a diabetic retinopathy (DR) database of 67 patient files. Each file consists of up to 20 retinal images and up to 11 information fields about the patient's medical history; each patient has been graded according to disease severity (ICDRS classification). The objective is to retrieve patient files with the same grade. The information available about each patient is both incomplete and heterogeneous (it mixes digital images and textual fields), so we propose three retrieval methods designed to handle both issues: the first is based on decision trees, the second on a Bayesian network, and the third on Dezert-Smarandache theory (DSmT). Images are characterized by their digital content: a feature vector extracted from the wavelet transform describes their texture, and microaneurysms are detected. The retrieval methods are assessed by the mean precision at 5, i.e. the mean percentage of cases relevant to a query among the top five results.

Results: Mean precisions at 5 of 81.0%, 70.4%, and 81.8% were achieved with the three methods, respectively. The scores range from 58.9% for the least frequent grade to 87.4% for the most frequent one, and from 42.1% for sparse patient files (10% of the data available) to 91.2% for comprehensive ones.

Conclusions: By retrieving entire patient files, we achieve a significantly higher precision than by retrieving similar images alone (a mean precision at 5 of 46.1% in the latter case). The proposed multimodal retrieval methods are precise enough to be useful in a DR diagnosis aid system.
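The wavelet-based texture characterization mentioned above can be sketched as follows. This is a minimal illustration assuming a grayscale retinal image, the PyWavelets library, the Haar wavelet, and subband-energy features; none of these choices are specified by the abstract itself.

```python
# Hypothetical sketch of wavelet-based texture features (not the paper's
# exact method): mean absolute coefficient per subband of a multilevel
# 2-D wavelet decomposition.
import numpy as np
import pywt  # PyWavelets


def texture_features(image: np.ndarray, wavelet: str = "haar", level: int = 3) -> np.ndarray:
    """Return one energy feature per wavelet subband of `image`."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    features = [np.mean(np.abs(coeffs[0]))]   # approximation subband
    for cH, cV, cD in coeffs[1:]:             # detail subbands at each level
        features += [np.mean(np.abs(c)) for c in (cH, cV, cD)]
    return np.asarray(features)               # 1 + 3*level features per image
```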
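For reference, the mean precision at 5 used as the evaluation criterion can be written as follows (a standard formulation; the notation is ours, not the paper's):

\[
\overline{P@5} \;=\; \frac{1}{|Q|} \sum_{q \in Q} \frac{\bigl|\,\mathrm{top}_5(q) \cap \mathrm{rel}(q)\bigr|}{5},
\]

where $Q$ is the set of query patient files, $\mathrm{top}_5(q)$ denotes the five highest-ranked cases retrieved for query $q$, and $\mathrm{rel}(q)$ denotes the cases sharing the ICDRS grade of $q$.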