Conference paper, 2020

Time your hedge with Deep Reinforcement Learning

Abstract

Can an asset manager plan the optimal timing for her/his hedging strategies given market conditions? The standard approach, based on Markowitz or other more or less sophisticated financial rules, aims to find the best portfolio allocation from forecasted expected returns and risk, but fails to fully relate market conditions to hedging decisions. In contrast, Deep Reinforcement Learning (DRL) can tackle this challenge by creating a dynamic dependency between market information and hedging allocation decisions. In this paper, we present a realistic and augmented DRL framework that: (i) uses additional contextual information to decide an action, (ii) has a one-period lag between observations and actions to account for the one-day turnover lag with which common asset managers rebalance their hedges, (iii) is fully tested for stability and robustness thanks to a repetitive train-test method called anchored walk-forward training, similar in spirit to k-fold cross-validation for time series, and (iv) allows managing the leverage of our hedging strategy. Our experiment for an augmented asset manager interested in sizing and timing his hedges shows that our approach achieves superior returns and lower risk.
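The anchored walk-forward scheme mentioned in the abstract can be illustrated with a short sketch. The snippet below is not the authors' code; the function name `anchored_walk_forward` and its parameters are hypothetical and only show the generic idea of a training window anchored at the start of the sample with a test window that rolls forward, so each test period is evaluated strictly out of sample.

```python
# A minimal sketch (not the paper's implementation) of anchored walk-forward
# train/test splits: every split trains from the very first observation (the
# "anchor") up to the test window, so no future data leaks into training.
import numpy as np


def anchored_walk_forward(n_obs: int, n_splits: int, test_size: int):
    """Yield (train_idx, test_idx) index pairs over a series of length n_obs.

    Hypothetical helper: the training window always starts at index 0 and
    grows, while the test window slides forward by `test_size` each split.
    """
    first_test_start = n_obs - n_splits * test_size
    for k in range(n_splits):
        test_start = first_test_start + k * test_size
        train_idx = np.arange(0, test_start)                 # anchored, growing
        test_idx = np.arange(test_start, test_start + test_size)
        yield train_idx, test_idx


if __name__ == "__main__":
    # Example: 1000 daily observations, 5 anchored splits of 100 test days each.
    for train_idx, test_idx in anchored_walk_forward(1000, 5, 100):
        print(f"train: 0..{train_idx[-1]}, test: {test_idx[0]}..{test_idx[-1]}")
```

Because the training set only grows from a fixed anchor, later splits see more history, which mirrors how a manager would retrain a model as new market data arrives.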
Main file
main.pdf (569.32 Ko)
Origin: Files produced by the author(s)

Dates and versions

hal-02977533, version 1 (25-10-2020)

Identifiers

  • HAL Id: hal-02977533, version 1

Cite

Eric Benhamou, David Saltiel, Sandrine Ungari, Abhishek Mukhopadhyay. Time your hedge with Deep Reinforcement Learning. ICAPS Workshop on Planning for Financial Services (FinPlan 2020), Oct 2020, Online, France. ⟨hal-02977533⟩