Reinforcement learning algorithms in an online EVCS problem with photovoltaic panels
Abstract
We consider an electric vehicle (EV) charging station (CS) equipped with k charging points and two energy sources: an electrical grid and photovoltaic panels. We consider scenarios with random EV arrivals and departures within a day. Upon arrival, the EV driver submits a charging request specifying the parking duration. A charging request can either be accepted or rejected. If it is accepted, the EV's state of charge (SoC) must reach 80\% before departure; otherwise, a penalty cost is applied. The electrical grid can supply only limited power to the CS. Moreover, the charging points deliver limited power either at a fixed rate (i.e., an on/off system) or at a variable rate ranging from 0 to a given maximum value. The main objective is to minimize the cost of energy supplied by the electrical grid while ensuring that EVs reach the desired SoC. Information about future charging requests, electricity prices, and solar production is not known in advance. Therefore, reinforcement learning (RL) can be more suitable than other optimization algorithms. Furthermore, since RL algorithms aim at maximizing the cumulative reward, which includes future rewards, future charging requests are implicitly taken into account. This work is based on an open-source environment called Chargym, which simulates the charging and discharging of EVs in a CS that integrates a photovoltaic generation system. In this work, we compare several RL algorithms under various hyperparameter settings.
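To make the RL formulation concrete, the sketch below shows a standard Gym-style interaction loop that accumulates the discounted return over one charging day, which is the quantity an RL agent seeks to maximize. The environment id "ChargingEnv-v0", the classic gym step API, and the random policy are illustrative assumptions, not details taken from Chargym or from this work; a trained agent (e.g., from an RL library) would replace the random action.

```python
# Minimal sketch (assumptions: a Gym-registered Chargym-like environment
# named "ChargingEnv-v0" and the classic gym step API; both are illustrative).
import gym

env = gym.make("ChargingEnv-v0")   # hypothetical id for the Chargym environment
gamma = 0.99                       # discount factor weighting future rewards

obs = env.reset()
done = False
discounted_return = 0.0
t = 0
while not done:
    # A trained RL policy would map the observation (electricity price, PV
    # production, connected EVs' SoC and remaining parking time) to charging
    # rates for the k charging points; a random action stands in for it here.
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)
    discounted_return += (gamma ** t) * reward
    t += 1

print(f"Episode discounted return: {discounted_return:.2f}")
```

Because the return sums rewards over the whole day, decisions made for the current charging request are evaluated jointly with their effect on rewards earned from requests that have not yet arrived, which is how future requests are implicitly taken into account.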