A global description of medical imaging with high precision
Abstract
This paper presents our solution for the efficient retrieval of medical images. Depending on the user, the same image can be described from different views. In essence, an image can be described on the basis of low-level properties, such as texture or color; contextual data, such as the date of acquisition or the author; or semantic content, such as real-world objects and their relations. Our approach consists of a multi-spaced description model capable of integrating these different facets (or views) of the medical image. Few existing solutions take into account the need for high expressive power in medical image description and the heterogeneity of user expertise (physician, researcher, student, etc.). For instance, spatial content expressed as relationships is decisive in surgery or radiation therapy of brain tumors, because the location of a tumor has profound implications for the therapeutic decision. Visual retrieval solutions are recommended and are the most appropriate for users who are not computer scientists. However, current visual languages suffer from several problems, notably ambiguities generated by the user and/or the system, and imprecision at different levels of image description. In this paper, we present our solution and demonstrate how spatial precision of medical image content can be achieved and how ambiguities can be resolved. An implementation called MIMS (Medical Image Management System) has been developed to demonstrate our proposal, and a set of tests has been carried out to validate the prototype.