Conference paper, Year: 2023

Testing Quality of Training in QoE-Aware SFC Orchestration Based on DRL Approach

Abstract

In this paper, we propose a Deep Reinforcement Learning (DRL) approach to optimize a learning policy for Service Function Chaining (SFC) orchestration that maximizes Quality of Experience (QoE) while meeting Quality of Service (QoS) requirements in Software Defined Networking (SDN)/Network Functions Virtualization (NFV) environments. We adopt an incremental orchestration strategy suited to online settings, in which each incoming SFC request is processed as a multi-step DRL problem. The DRL agent is implemented with a Deep Q-Network (DQN) variant known as Double DQN. We focus in particular on evaluating the performance and robustness of the DRL agent during the training phase by investigating and testing the quality of training. To this end, we define a testing metric that monitors the agent's performance, quantified as a QoE threshold score to be reached on average over the last 100 runs of the training phase. Numerical results show how the DRL agent behaves during training and how it attempts to reach a predefined average QoE threshold score for different network scales. We also highlight the effect of network scale on achieving a suitable trade-off between performance and convergence.
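The sketch below is a minimal illustration (not the authors' implementation) of the two ingredients named in the abstract: the Double DQN target, where the online network selects the next action and the target network evaluates it, and the training-quality test based on the average QoE score over the last 100 training runs. All names and values (GAMMA, QOE_THRESHOLD, the Q-value arrays) are illustrative assumptions.

```python
from collections import deque
import numpy as np

GAMMA = 0.99           # discount factor (assumed value)
QOE_THRESHOLD = 0.8    # predefined average-QoE score to reach (assumed value)

def double_dqn_target(reward, next_q_online, next_q_target, done):
    """Double DQN bootstrap target: action selection by the online network,
    action evaluation by the target network."""
    best_action = np.argmax(next_q_online)        # selection (online net)
    bootstrap = next_q_target[best_action]        # evaluation (target net)
    return reward + (0.0 if done else GAMMA * bootstrap)

# Training-quality test: track the episode QoE score and check whether its
# mean over the last 100 episodes has reached the predefined threshold.
recent_qoe = deque(maxlen=100)

def training_converged(episode_qoe):
    recent_qoe.append(episode_qoe)
    return (len(recent_qoe) == recent_qoe.maxlen
            and np.mean(recent_qoe) >= QOE_THRESHOLD)
```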
No file deposited

Dates and versions

hal-04271036, version 1 (05-11-2023)

Identifiers

Cite

Mohamed Escheikh, Wiem Taktak, Kamel Barkaoui. Testing Quality of Training in QoE-Aware SFC Orchestration Based on DRL Approach. IFIP International Conference on Testing Software and Systems (ICTSS 2023), Sep 2023, Bergamo University, Italy. pp.274-288, ⟨10.1007/978-3-031-43240-8_19⟩. ⟨hal-04271036⟩