Revisiting column-generation-based matheuristic for learning classification trees
Abstract
Decision trees are highly interpretable models for solving classification problems in machine learning (ML). The standard ML algorithms for training decision trees are fast but generate suboptimal trees in terms of accuracy. Other discrete optimization models in the literature address the optimality problem but only work well on relatively small datasets. Firat et al. (2020) proposed a column-generation-based heuristic approach for learning decision trees. This approach improves scalability and can work with large datasets. In this paper, we describe improvements to this column-generation approach. First, we modify the subproblem model to significantly reduce the number of subproblems in multiclass classification instances. Next, we show that the data-dependent constraints in the master problem are implied, and use them as cutting planes. Furthermore, we describe a separation model to generate data points for which the linear programming relaxation solution violates their corresponding constraints. We conclude by presenting computational results that show that these modifications result in better scalability.
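To make the workflow sketched above concrete, the following is a minimal, illustrative Python skeleton of a column-generation loop in which data-point constraints are separated lazily as cutting planes rather than included in the master problem up front. All function names here (`solve_restricted_master_lp`, `separate_violated_points`, `price_new_columns`) are hypothetical placeholders, not the paper's implementation; the stubs only mark where the restricted master LP, the separation model, and the pricing subproblems would plug in.

```python
# Illustrative sketch only: placeholder functions stand in for the master LP,
# the separation model, and the pricing subproblems described in the abstract.

def solve_restricted_master_lp(columns, active_cuts):
    """Solve the LP relaxation of the restricted master problem.

    Placeholder: a real implementation would call an LP solver and return
    primal values and the dual prices needed by the pricing subproblems.
    """
    primal = {col: 0.0 for col in columns}
    duals = {}
    return primal, duals


def separate_violated_points(primal, dataset):
    """Separation step: return data points whose (implied) constraint is
    violated by the current LP solution; empty if no violation is found."""
    return []


def price_new_columns(duals):
    """Pricing subproblems: return columns (e.g. tree paths / leaf
    assignments) with negative reduced cost; empty if none exist."""
    return []


def column_generation(dataset, initial_columns):
    columns = list(initial_columns)
    active_cuts = []                # data-point constraints added lazily
    while True:
        primal, duals = solve_restricted_master_lp(columns, active_cuts)

        violated = separate_violated_points(primal, dataset)
        if violated:
            active_cuts.extend(violated)   # add cutting planes, re-solve
            continue

        new_columns = price_new_columns(duals)
        if not new_columns:
            break                          # no improving column: LP optimal
        columns.extend(new_columns)
    return columns, primal


if __name__ == "__main__":
    cols, lp_solution = column_generation(dataset=[], initial_columns=["root"])
    print(len(cols), "columns in the final restricted master problem")
```

With real subproblem and separation models substituted for the stubs, the loop alternates between re-solving the restricted master LP, adding violated data-point constraints, and pricing in improving columns until neither step produces anything new.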
Keywords
Machine Learning
Decision trees
Column Generation
Classification
Mixed Integer Programming
Machine Learning (cs.LG)
Artificial Intelligence (cs.AI)
Optimization and Control (math.OC)
FOS: Computer and information sciences
FOS: Mathematics
Domains
Machine Learning [cs.LG]