Sharing (musical) time between machines and humans: simultaneity, succession and duration in real-time computer-human musical interaction
Abstract
The functional approach to AI has focused on providing computers with cognitive
capabilities usually attributed to humans: translating a text, recognizing an object in an image,
playing chess, planning a route, etc. While perception and action have been widely considered,
the cognitive capabilities related to the apprehension and organization of time have been
less studied. Music is a prime area in which to address these issues. In all written forms of music,
the act of composition is a choreography of events and expectations in time that allows
sophisticated, continuous interactions among musicians. This powerful effect results from the
intrinsic combination of strong language formalisms (for authoring music) with performance
mechanisms that provide synchrony, real-time coordination of actions, and robustness of the
expected results in ensemble music.
Bringing such capabilities to computers, so that they can take part in musical interactions
with human musicians, offers an excellent workbench for investigating and experimentally
testing several temporal notions.