Report (Research Report) Year: 2019

Principles of user-centered online reinforcement learning for the emergence of service compositions

Abstract

Cyber-physical and ambient systems surround the human user with services at her/his disposal. These services, which are more or less complex, must be tailored as closely as possible to her/his preferences and to the current situation. We propose to build them automatically and on the fly by composing more elementary services present in the environment at that time, without any prior expression of the user's needs nor any specification of a process or a composition model. In a context of high dynamic variability of both the ambient environment and the needs, user involvement must be kept to a minimum. In order to produce the knowledge necessary for automatic composition in the absence of an initial guideline, we have developed a generic solution based on online reinforcement learning. It is decentralized within a multi-agent system in charge of the administration and composition of the services, which learns incrementally from and for the user. Thus, our architecture puts the user in the loop. It relies on an interaction protocol between agents that supports service discovery and selection in an open and unstable environment.
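To make the idea of learning incrementally from user feedback more concrete, here is a minimal sketch, not the report's actual implementation: a bandit-style online value update in which an agent keeps estimates for candidate composition steps and adjusts them after each user reaction. All names (ServiceSelector, the feedback values, the learning rate) are hypothetical illustrations.

    # Minimal illustrative sketch (assumption, not the report's algorithm):
    # online value updates from user feedback for selecting composition steps.
    import random
    from collections import defaultdict

    class ServiceSelector:
        def __init__(self, learning_rate=0.1, epsilon=0.2):
            self.q = defaultdict(float)   # value estimate per candidate composition step
            self.alpha = learning_rate    # how fast new feedback overrides old estimates
            self.epsilon = epsilon        # exploration rate in an open, changing environment

        def select(self, candidates):
            """Pick a candidate: mostly the best-known one, sometimes explore."""
            if not candidates:
                return None
            if random.random() < self.epsilon:
                return random.choice(candidates)
            return max(candidates, key=lambda c: self.q[c])

        def update(self, candidate, user_feedback):
            """Incremental update from user feedback (e.g. +1 accepted, -1 rejected)."""
            self.q[candidate] += self.alpha * (user_feedback - self.q[candidate])

    # Usage: candidates are discovered at runtime; the user stays in the loop.
    selector = ServiceSelector()
    available = ["lamp.on->presence.sensor", "speaker.play->phone.notify"]
    choice = selector.select(available)
    selector.update(choice, user_feedback=1.0)   # user accepted the proposed composition

In the report's architecture this learning is decentralized across the agents administering the services rather than held in a single selector as above.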
Main file
IRIT_RR_2019_05_FR.pdf (1.13 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-02976638, version 1 (23-01-2023)

Identifiers

  • HAL Id: hal-02976638, version 1

Cite

Walid Younes, Sylvie Trouilhet, Françoise Adreit, Jean-Paul Arcangeli. Principles of user-centered online reinforcement learning for the emergence of service compositions. [Research Report] IRIT/RR–2019–05–FR, IRIT : Institut de Recherche Informatique de Toulouse. 2019. ⟨hal-02976638⟩