Applications of Reinforcement Learning to Structured Prediction - Archive ouverte HAL
Conference Paper, Year: 2008

Applications of Reinforcement Learning to Structured Prediction

Abstract

Supervised learning is about learning functions from a set of input examples and their corresponding outputs. A recent trend in this field is to consider structured outputs such as sequences, trees, or graphs. When predicting such structured data, learning models have to select solutions within very large discrete spaces. The combinatorial nature of this problem has recently led to learning models that integrate a search component. In this paper, we show that Structured Prediction (SP) can be seen as a sequential decision problem. We introduce SP-MDP, a Markov Decision Process formulation of Structured Prediction. Learning the optimal policy in SP-MDP is shown to be equivalent to solving the SP problem. This allows us to apply classical Reinforcement Learning (RL) algorithms to SP. We present experiments on two tasks. The first, sequence labeling, has been extensively studied and allows us to compare the RL approach with traditional SP methods. The second, tree transformation, is a challenging SP task with numerous large-scale real-world applications. We report successful results with general RL algorithms on this task, on which traditional SP models fail.
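To make the SP-MDP idea concrete, below is a minimal sketch (not the paper's implementation) of how sequence labeling can be cast as a sequential decision problem: the state is the input sequence together with the labels predicted so far, actions assign a label to the next position, and a per-position reward is earned for correct labels. The toy task (vowel/consonant tagging), the feature abstraction, the reward definition, and the use of tabular Q-learning are all illustrative assumptions, chosen only to show the MDP framing.

```python
"""Minimal sketch: sequence labeling as an MDP solved with tabular Q-learning.
The task, features, and reward below are illustrative assumptions, not the
SP-MDP construction used in the paper."""
import random
from collections import defaultdict

LABELS = ["V", "C"]  # toy labels: vowel / consonant


def is_vowel(ch):
    return ch in "aeiou"


def features(sequence, position, prev_label):
    # State abstraction: current token plus the previously predicted label.
    return (sequence[position], prev_label)


def reward(sequence, position, action):
    # Per-position reward: +1 for a correct label, 0 otherwise (assumption;
    # the paper derives rewards from structured loss functions).
    gold = "V" if is_vowel(sequence[position]) else "C"
    return 1.0 if action == gold else 0.0


def train(corpus, episodes=2000, alpha=0.1, epsilon=0.1):
    Q = defaultdict(float)  # Q[(state_features, action)]
    for _ in range(episodes):
        seq = random.choice(corpus)
        prev = "<s>"
        for pos in range(len(seq)):
            s = features(seq, pos, prev)
            # Epsilon-greedy action selection over the label set.
            if random.random() < epsilon:
                a = random.choice(LABELS)
            else:
                a = max(LABELS, key=lambda l: Q[(s, l)])
            r = reward(seq, pos, a)
            # One-step Q-learning update (undiscounted finite horizon).
            if pos + 1 < len(seq):
                s2 = features(seq, pos + 1, a)
                target = r + max(Q[(s2, l)] for l in LABELS)
            else:
                target = r
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            prev = a
    return Q


def predict(Q, seq):
    # Greedy rollout of the learned policy over the sequence.
    prev, out = "<s>", []
    for pos in range(len(seq)):
        s = features(seq, pos, prev)
        a = max(LABELS, key=lambda l: Q[(s, l)])
        out.append(a)
        prev = a
    return out


if __name__ == "__main__":
    corpus = ["structured", "prediction", "reinforcement", "learning"]
    Q = train(corpus)
    print(predict(Q, "policy"))  # predicted label for each character
```

In the paper, rewards are instead derived from structured losses and the policy is learned over much larger state spaces; the sketch only mirrors the core decomposition of a structured output into a sequence of labeling decisions.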

Dates and versions

hal-01305074, version 1 (20-04-2016)

Identifiers

Cite

Francis Maes, Ludovic Denoyer, Patrick Gallinari. Applications of Reinforcement Learning to Structured Prediction. European Workshop on Reinforcement Learning, Jun 2008, Villeneuve d’Ascq, France. pp.205-219, ⟨10.1007/978-3-540-89722-4_16⟩. ⟨hal-01305074⟩