Principles of user-centered online reinforcement learning for the emergence of service compositions
Abstract
Cyber-physical and ambient systems surround the human user with services at their disposal. These services, of varying complexity, must be tailored as closely as possible to the user's preferences and to the current situation. We propose to build them automatically and on the fly by composing the more elementary services present in the environment at that time, without prior expression of the user's needs and without specification of a process or a composition model. In a context of high dynamic variability of both the ambient environment and the needs, the user must be involved as little as possible. To produce the knowledge required for automatic composition in the absence of an initial guideline, we have developed a generic solution based on online reinforcement learning. It is decentralized within a multi-agent system in charge of administering and composing the services, which learns incrementally from and for the user. Our architecture thus puts the user in the loop. It relies on an interaction protocol between agents that supports service discovery and selection in an open and unstable environment.
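The paper's own mechanism is not detailed in this abstract; as a hedged illustration only, the kind of user-in-the-loop online learning described here can be pictured as an agent that selects among the services currently discovered in the environment and incrementally updates its estimates from user feedback. All names and feedback values below are hypothetical and are not taken from the authors' system.

```python
# Illustrative sketch only: a minimal online learner that chooses among the
# services available at a given moment and learns from user feedback.
# ServiceSelector and the service identifiers are hypothetical examples.
import random
from collections import defaultdict


class ServiceSelector:
    """Epsilon-greedy online learner over a dynamic set of services."""

    def __init__(self, epsilon: float = 0.1, alpha: float = 0.2):
        self.epsilon = epsilon            # exploration rate
        self.alpha = alpha                # incremental learning rate
        self.value = defaultdict(float)   # estimated value per service id

    def select(self, available_services: list[str]) -> str:
        # Explore occasionally; otherwise exploit the best-known service
        # among those discovered in the current (open, unstable) environment.
        if random.random() < self.epsilon:
            return random.choice(available_services)
        return max(available_services, key=lambda s: self.value[s])

    def update(self, service: str, user_feedback: float) -> None:
        # Incremental update from user feedback (e.g. +1 accept, -1 reject),
        # keeping the user in the loop without a prior composition model.
        self.value[service] += self.alpha * (user_feedback - self.value[service])


if __name__ == "__main__":
    selector = ServiceSelector()
    # Services discovered at this moment in the ambient environment (hypothetical).
    discovered = ["light_dimmer", "shutter_control", "music_player"]
    chosen = selector.select(discovered)
    selector.update(chosen, user_feedback=1.0)  # user accepted the proposal
```

In the decentralized setting the paper describes, such learning would be distributed across the agents administering the services rather than held by a single selector; this sketch only conveys the online, feedback-driven update principle.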