Towards improving textual anatomy image classification using local image features
Abstract
Text-based image classification methods use terms extracted from image annotations (the image caption, a related article, etc.) to perform classification. For images covering different anatomical structures (chest, spine, etc.), however, the precision of purely textual classification often suffers from the high complexity of the text content (e.g. the terms extracted from two abdominal MR images may differ considerably: the terms of one image may concern gastroenteritis while those of the other involve hysteromyoma). This paper tackles the anatomy image classification problem with a hybrid approach. First, a mutual information (MI) based filter selects the terms with the top MI scores for each anatomical class, which helps reduce the noise in the raw text. Second, local features extracted from the images are transformed into visual descriptors. Last, a hybrid scheme is applied to the results of the textual and visual methods to further improve the classification. Experiments show that this hybrid scheme outperforms the textual-only and visual-only methods under different anatomical class settings.
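The abstract does not give the exact form of the MI-based filter; the sketch below is one plausible reading, in which each term is scored by its mutual information with a binary "image belongs to the anatomical class" indicator and the top-k terms are kept per class. The function name top_mi_terms, the binary term-document matrix, and the base-2 logarithm are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def top_mi_terms(X, y, target_class, k=10, eps=1e-12):
    """Rank terms by mutual information with one anatomical class (a sketch,
    not the paper's implementation).

    X            : (n_docs, n_terms) binary matrix, X[i, j] = 1 if term j
                   occurs in the annotation text of image i.
    y            : (n_docs,) array of anatomical class labels.
    target_class : the class whose characteristic terms we want.
    Returns the column indices of the k terms with the highest MI scores.
    """
    X = np.asarray(X, dtype=bool)
    c = np.asarray(y) == target_class      # per-document class indicator
    n = len(c)
    p_c = c.mean()                         # P(class)
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        t = X[:, j]
        p_t = t.mean()                     # P(term)
        mi = 0.0
        # Sum over the four (term present/absent, class yes/no) cells.
        for joint, marg_t, marg_c in (
            ((t & c).sum(),   p_t,     p_c),
            ((t & ~c).sum(),  p_t,     1 - p_c),
            ((~t & c).sum(),  1 - p_t, p_c),
            ((~t & ~c).sum(), 1 - p_t, 1 - p_c),
        ):
            p_tc = joint / n
            if p_tc > 0:
                mi += p_tc * np.log2(p_tc / (marg_t * marg_c + eps))
        scores[j] = mi
    # The highest-scoring terms form the class-specific vocabulary.
    return np.argsort(scores)[::-1][:k]
```

Running such a filter once per anatomical class would yield a compact, class-specific vocabulary for the textual classifier; the abstract leaves the exact scoring and cutoff choices unspecified.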