Optimal quadrature-sparsification for integral operator approximation
Abstract
We address the problem of designing sparse quadratures for the approximation of integral operators related to symmetric positive-semidefinite kernels. For a given kernel, we introduce the notion of squared-kernel discrepancy between two measures, defined as the squared Hilbert-Schmidt norm of the difference between the integral operators induced by the underlying kernel and the two measures, these operators being viewed as operators on the underlying reproducing kernel Hilbert space. In the framework of integral operators defined from measures supported by a fixed set of points, sparsity of the approximate quadrature can be enforced through an l1-type penalisation, and the computation of a penalised squared-kernel-discrepancy-optimal Nyström approximation is thus turned into a convex quadratic minimisation problem. The penalisation can be introduced as a regularisation term or as a constraint, the two formulations being equivalent; by analogy with spectral truncation, we propose in particular to penalise the trace of the approximate operator. The quadratic programs related to the regularised and constrained squared-kernel-discrepancy minimisation problems can be interpreted as the Lagrange dual formulations of distorted one-class support-vector machines involving the squared kernel and the initial discrete measure. Numerical strategies for solving large-scale squared-kernel-discrepancy minimisation problems are investigated, and the efficiency of the approach is illustrated on a series of examples. In particular, we demonstrate the ability of the proposed methodology to produce accurate approximations of the main eigenpairs of kernel matrices related to large-scale datasets.
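As a rough illustration of the discrete setting described above, the sketch below sets up the penalised squared-kernel-discrepancy minimisation as a convex quadratic program over nonnegative quadrature weights and solves it by projected gradient descent. It assumes a Gaussian kernel, a uniform initial measure, and a trace penalty with weight lmbda; the function names, the solver, and the parameter values are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's solver): penalised squared-kernel-discrepancy
# minimisation in the discrete setting, with a Gaussian kernel and a
# projected-gradient solver; lmbda, n_iter and step are illustrative choices.
import numpy as np

def gaussian_kernel(X, gamma=1.0):
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def sparse_quadrature_weights(K, omega, lmbda, n_iter=5000, step=None):
    """Minimise  v^T S v - 2 omega^T S v + lmbda * diag(K)^T v  over v >= 0,
    where S = K * K (entrywise squared kernel); the quadratic part is the
    discrete squared-kernel discrepancy up to an additive constant."""
    S = K * K                    # squared kernel, PSD by the Schur product theorem
    c = S @ omega                # linear term coming from the initial discrete measure
    t = np.diag(K).copy()        # trace penalty: tr(T_nu) = sum_i v_i K(x_i, x_i)
    if step is None:
        step = 0.5 / np.linalg.norm(S, 2)   # 1/L, L = Lipschitz constant of the gradient
    v = omega.copy()
    for _ in range(n_iter):
        grad = 2.0 * (S @ v - c) + lmbda * t
        v = np.maximum(v - step * grad, 0.0)  # projection onto the nonnegative orthant
    return v

# Toy usage: larger lmbda promotes sparser quadratures (fewer support points).
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 2))
K = gaussian_kernel(X, gamma=0.5)
omega = np.full(300, 1.0 / 300)           # uniform initial discrete measure
v = sparse_quadrature_weights(K, omega, lmbda=0.02)
print("support size:", int(np.sum(v > 1e-8)))
```

The support of the resulting weight vector can then be used as landmark set for a weighted Nyström approximation of the main eigenpairs of the kernel matrix, as discussed in the abstract.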
Origin: Files produced by the author(s)