Stabilizing Agents' Interactions in Dynamic Contexts
Abstract
We address the problem of designing efficient coordination protocols in contexts where selfish agents, hosted on mobile and ad hoc devices, must accomplish a set of dynamic tasks. We assume that, because of the dynamic behavior of the agents, induced by the unpredictable availability of their devices, and the evolving nature of the tasks, it is not possible to design an efficient coordination mechanism that relies on prior knowledge about the agents before the tasks are carried out. In these contexts, we propose two protocols, a depth exploration protocol and a width exploration protocol, based on the Markov Decision Process (MDP) formalism and on the alliance principle. Our protocols aim to ensure, and dynamically adapt, the stability of the agents' coordination teams (coalitions), taking into account agent withdrawals and the dynamic evolution of the tasks. We develop a theoretical study of our mechanism and provide an analytical and experimental performance evaluation.
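For reference, the protocols build on the standard MDP formalism mentioned above; a minimal sketch of that model in the usual textbook notation (this notation is not taken from the paper itself) is:

\[
  \mathcal{M} = \langle S, A, T, R \rangle,
  \qquad T : S \times A \times S \to [0,1],
  \qquad R : S \times A \to \mathbb{R},
\]

where $S$ is the set of states, $A$ the set of actions, $T(s, a, s')$ the probability of reaching state $s'$ when action $a$ is taken in state $s$, and $R(s, a)$ the immediate reward; how states, actions, and rewards are instantiated for coalition formation is specified in the body of the paper.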