Interior Point Methods for Supervised Training of Artificial Neural Networks with Bounded Weights
Abstract
We investigate and demonstrate the benefits of applying interior point methods (IPMs) to the supervised training of artificial neural networks. Three IPM algorithms are presented in this paper: a deterministic logarithmic barrier method (LB), a stochastic logarithmic barrier method (SB), and a quadratic trust region method. These are applied to the training of supervised feedforward artificial neural networks. We treat neural network training as a nonlinearly constrained optimization problem; specifically, we place bounds on the weights to avoid network paralysis. In the LB method, the search direction is derived using a recursive prediction error method (RPEM) that iteratively approximates the inverse of the Hessian of a logarithmic error function. The weights move along a central trajectory in the interior of the feasible weight space, which gives the method good convergence properties. In the stochastic version, a stochastic optimization procedure adds random fluctuations to the RPEM direction at each iteration in order to escape local minima; this technique can be viewed as a hybrid of the barrier function method and simulated annealing. In the third algorithm, we approximate the objective function by a convex quadratic function and use a trust region method to find the optimal weights. Computational experiments on the approximation of discrete dynamical systems and on medical diagnosis problems are also provided.
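To make the bounded-weight barrier formulation concrete, the sketch below trains a small one-hidden-layer network by following the gradient of a squared-error loss augmented with a logarithmic barrier for the box constraints l <= w <= u. It is a minimal illustration under our own assumptions: the bounds, the barrier parameter mu, the toy data, and the plain gradient step are illustrative choices, not the paper's RPEM-based update or its stochastic and trust region variants.

```python
import numpy as np

# Minimal sketch (our own illustration, not the paper's algorithm) of
# log-barrier training for a one-hidden-layer network with box-bounded
# weights l <= w <= u. The paper derives its search direction via a
# recursive prediction error method (RPEM); here we take a plain
# gradient step on the barrier-penalized loss for readability.

rng = np.random.default_rng(0)
l, u = -5.0, 5.0          # assumed weight bounds
mu, lr = 1e-3, 0.2        # assumed barrier parameter and step size

# Toy data: approximate y = sin(x) on [-2, 2].
X = np.linspace(-2, 2, 64).reshape(-1, 1)
Y = np.sin(X)
N = X.shape[0]

W1 = rng.uniform(-0.5, 0.5, (1, 8))
W2 = rng.uniform(-0.5, 0.5, (8, 1))

def forward(X, W1, W2):
    H = np.tanh(X @ W1)   # hidden-layer activations
    return H, H @ W2      # linear output layer

for step in range(5000):
    H, out = forward(X, W1, W2)
    err = out - Y

    # Gradients of the mean squared error E(w).
    gW2 = H.T @ err / N
    gH = (err @ W2.T) * (1.0 - H**2)   # backprop through tanh
    gW1 = X.T @ gH / N

    # Gradient of the barrier term -mu * (log(u - w) + log(w - l)),
    # which repels the weights from the bounds and keeps every
    # iterate strictly in the interior of the feasible box.
    bW1 = mu * (1.0 / (u - W1) - 1.0 / (W1 - l))
    bW2 = mu * (1.0 / (u - W2) - 1.0 / (W2 - l))

    W1 -= lr * (gW1 + bW1)
    W2 -= lr * (gW2 + bW2)

print("final MSE:", float(np.mean((forward(X, W1, W2)[1] - Y) ** 2)))
```

Because the barrier gradient blows up as a weight approaches either bound, iterates that start strictly inside the box remain interior throughout training, which is the defining property the barrier methods in this paper exploit.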