An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices
Abstract
We propose a comprehensive analysis of existing concepts of AI drawn from different disciplines: psychology and engineering tackle the notion of intelligence, while ethics and law seek to regulate AI innovations. The aim is to identify shared notions and discrepancies that must be considered when qualifying AI systems. The relevant concepts are integrated into a matrix intended to help define more precisely when and how computing tools (programs or devices) may be qualified as AI, while highlighting critical features that serve a specific technical, ethical, and legal assessment of the challenges in AI development. Some adaptations of existing notions of AI characteristics are proposed. The matrix is a risk-based conceptual model designed to allow an empirical, flexible, and scalable qualification of AI technologies for the purposes of benefit-risk assessment practices, technological monitoring, and regulatory compliance: it offers a structured reflection tool for stakeholders in AI development who are engaged in responsible research and innovation.