Improving neural classification with Logical Prior Knowledge
Abstract
Neurosymbolic AI is a growing field of research aiming to combine the learning capabilities of neural networks with the reasoning abilities of symbolic systems. In this paper, we propose a new formalism for supervised multi-label classification informed by propositional prior knowledge. We introduce a new neurosymbolic technique called semantic conditioning at inference, which constrains the system only during inference while leaving training unaffected. We discuss its theoretical and practical advantages over two other popular neurosymbolic techniques: semantic conditioning and semantic regularization. We develop a new multi-scale methodology to evaluate how the benefits of a neurosymbolic technique evolve with the scale of the network. We then experimentally evaluate and compare the benefits of all three techniques across model scales on several datasets. Our results demonstrate that semantic conditioning at inference can leverage prior knowledge to build more accurate neural-based systems than an uninformed baseline. We show that, despite operating only at inference, it retains a substantial portion of the benefits offered by semantic conditioning. Finally, we detail several use cases in which semantic conditioning at inference can be applied while semantic conditioning cannot.
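To make the idea of constraining only the inference step concrete, the following is a minimal sketch, not the paper's implementation: the network is trained as an ordinary multi-label classifier, and at prediction time we keep only the label assignments that satisfy the propositional prior knowledge and return the most probable one. The constraint used here (label 0 implies label 1) and the function names are hypothetical illustrations; practical systems would replace the brute-force enumeration with compiled logical circuits.

```python
from itertools import product
import numpy as np

def satisfies_constraint(assignment):
    """Hypothetical propositional prior: y0 -> y1 (if label 0 is on, label 1 must be on)."""
    y0, y1, y2 = assignment
    return (not y0) or y1

def condition_at_inference(probs, constraint):
    """Return the most probable label assignment that satisfies the constraint.

    probs: per-label Bernoulli probabilities from the (unconstrained) trained network.
    constraint: predicate over a full 0/1 label assignment.
    """
    best, best_score = None, -np.inf
    for assignment in product([0, 1], repeat=len(probs)):
        if not constraint(assignment):
            continue  # discard assignments ruled out by prior knowledge
        # log-probability of this assignment under independent per-label predictions
        score = sum(np.log(p if y else 1 - p) for p, y in zip(probs, assignment))
        if score > best_score:
            best, best_score = assignment, score
    return best

# Example: independent thresholding would output (1, 0, 1), violating y0 -> y1;
# conditioning at inference instead returns the best assignment that is valid, (1, 1, 1).
print(condition_at_inference([0.9, 0.4, 0.8], satisfies_constraint))
```

Because the constraint is applied only after training, the same mechanism can wrap any already-trained classifier, which is what distinguishes this setting from semantic conditioning applied during training.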