A Convolution-Assisted Vision Transformer for the Classification of Pancreatic Ductal Adenocarcinoma
Abstract
Many approaches to classifying whole slide images (WSIs) work by classifying patches extracted from the WSIs and extrapolating the patch-level classes to a class for the whole slide. Pathologists, however, consider the region surrounding a patch before classifying it. We show that this contextual reasoning can be emulated by combining a self-supervised CNN feature extractor with a vision transformer. We also study how an ImageNet-pretrained feature extractor compares to a self-supervised feature extractor trained on selected medical images. Our experiments achieved a patch-level classification accuracy of 93.69% with an area under the curve (AUC) of 0.76 using the self-supervised feature extractor, and an accuracy of 97.35% with an AUC of 0.77 using the ImageNet feature extractor. At the WSI level, we obtained an accuracy of 73.68% with the self-supervised feature extractor and 36.84% with the ImageNet feature extractor.
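To make the architecture concrete, the sketch below shows one plausible reading of the convolution-assisted design: a frozen CNN embeds a patch together with its spatial neighbors, and a transformer encoder attends over those embeddings before classifying the center patch. The class name `ConvAssistedViT`, the ResNet-18 backbone, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class ConvAssistedViT(nn.Module):
    """Hypothetical sketch: a frozen CNN embeds a patch and its spatial
    neighbors; a transformer encoder attends over the resulting sequence
    so surrounding context informs the center patch's class."""

    def __init__(self, num_classes=2, embed_dim=512, num_neighbors=8):
        super().__init__()
        # CNN feature extractor (ImageNet weights here; the paper also
        # uses a self-supervised variant trained on medical images).
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.extractor = nn.Sequential(*list(backbone.children())[:-1])
        for p in self.extractor.parameters():
            p.requires_grad = False  # keep the extractor frozen

        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=4)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, patches):
        # patches: (batch, 1 + num_neighbors, 3, H, W), center patch first.
        b, n, c, h, w = patches.shape
        feats = self.extractor(patches.flatten(0, 1)).flatten(1)  # (b*n, 512)
        feats = feats.view(b, n, -1)
        tokens = torch.cat([self.cls_token.expand(b, -1, -1), feats], dim=1)
        out = self.encoder(tokens)
        return self.head(out[:, 0])  # classify via the [CLS] token


# Usage: a batch of 4 center patches, each with its 8 neighbors.
model = ConvAssistedViT()
logits = model(torch.randn(4, 9, 3, 224, 224))  # -> (4, 2)
```

Under this reading, WSI-level prediction would be obtained by aggregating patch-level predictions across the slide, consistent with the extrapolation described above.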