Journal article, IEEE Open Journal of Signal Processing, Year: 2020

Learning over Multitask Graphs—Part II: Performance Analysis

Abstract

Part I of this paper formulated a multitask optimization problem where agents in the network have individual objectives to meet, or individual parameter vectors to estimate, subject to a smoothness condition over the graph. A diffusion strategy was devised that responds to streaming data and employs stochastic approximations in place of actual gradient vectors, which are generally unavailable. The approach relied on minimizing a global cost consisting of the aggregate sum of individual costs regularized by a term that promotes smoothness. We examined the first-order, the second-order, and the fourth-order stability of the multitask learning algorithm. The results identified conditions on the step-size parameter, regularization strength, and data characteristics in order to ensure stability. This Part II examines steady-state performance of the strategy. The results reveal explicitly the influence of the network topology and the regularization strength on the network performance and provide insights into the design of effective multitask strategies for distributed inference over networks.
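To make the setup concrete, the sketch below illustrates one common way such a smoothness-regularized multitask diffusion strategy can be simulated. It is not the authors' exact recursion: the quadratic (LMS-type) individual costs, the ring topology, the adapt-then-combine ordering, and all variable names are assumptions for illustration only.

```python
# Illustrative sketch (assumptions, not the paper's exact algorithm):
# multitask diffusion adaptation where each agent runs a stochastic-gradient
# step on its own cost, then is nudged toward its neighbors' estimates by a
# graph-smoothness regularizer of strength eta.
import numpy as np

rng = np.random.default_rng(0)

N, M = 10, 5            # number of agents, parameter dimension (assumed)
mu, eta = 0.01, 1.0     # step size and regularization strength (assumed)

# Hypothetical smooth task profile: neighboring agents have similar targets.
base = rng.standard_normal(M)
w_true = np.array([base + 0.1 * k * np.ones(M) / N for k in range(N)])

# Ring topology with symmetric, nonnegative neighbor weights c[k, l].
C = np.zeros((N, N))
for k in range(N):
    C[k, (k - 1) % N] = C[k, (k + 1) % N] = 0.5
deg = C.sum(axis=1, keepdims=True)

W = np.zeros((N, M))    # current estimates, one row per agent
for i in range(5000):
    # Adaptation step: instantaneous (stochastic) gradient of each agent's
    # mean-square-error cost, computed from streaming data (U, d).
    U = rng.standard_normal((N, M))
    d = np.einsum("km,km->k", U, w_true) + 0.05 * rng.standard_normal(N)
    err = d - np.einsum("km,km->k", U, W)
    Psi = W + mu * err[:, None] * U

    # Combination step induced by the smoothness regularizer: gradient step
    # on (eta/2) * sum_{k,l} c_kl ||w_k - w_l||^2, i.e. each agent moves
    # toward a weighted average of its neighbors' intermediate estimates.
    W = Psi + mu * eta * (C @ Psi - deg * Psi)

# As eta grows, the estimates are pulled toward a common vector; as eta -> 0,
# each agent solves its individual estimation problem independently.
print(np.linalg.norm(W - w_true) / np.linalg.norm(w_true))
```

The regularization strength eta controls the trade-off discussed in the abstract: it interpolates between fully independent per-agent learning and consensus-like behavior across the graph.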
Main file: Learning_Over_Multitask_GraphsPart_II_Performance_Analysis.pdf (2.2 MB)
Origin: Publisher files allowed on an open archive

Dates and versions

hal-03347217, version 1 (17-09-2021)

Identifiers

Cite

Roula Nassif, Stefan Vlaski, Cédric Richard, Ali H. Sayed. Learning over Multitask Graphs—Part II: Performance Analysis. IEEE Open Journal of Signal Processing, 2020, 1, pp. 46-63. ⟨10.1109/ojsp.2020.2989031⟩. ⟨hal-03347217⟩