Computing Krylov iterates in the time of matrix multiplication
Abstract
Krylov methods rely on iterated matrix-vector products \(A^k u_j\) for an \(n\times n\) matrix \(A\) and vectors \(u_1,\ldots,u_m\). The space spanned by all iterates \(A^k u_j\) admits a particular basis, called the \emph{maximal Krylov basis}: it consists of the iterates of the first vector \(u_1, Au_1, A^2u_1, \ldots\) until a linear dependency is reached, followed by the iterates of the subsequent vectors, handled in the same way, until a basis is obtained. Finding minimal polynomials and Frobenius normal forms is closely related to computing maximal Krylov bases. Until this paper, the fastest way to produce these bases was Keller-Gehrig's 1985 algorithm, whose complexity bound \(O(n^\omega \log(n))\) comes from repeated squarings of \(A\) and logarithmically many Gaussian eliminations. Here \(\omega>2\) is a feasible exponent for matrix multiplication over the base field. We present an algorithm computing the maximal Krylov basis in \(O(n^\omega\log\log(n))\) field operations when \(m \in O(n)\), and even in \(O(n^\omega)\) as soon as \(m\in O(n/\log(n)^c)\) for some fixed real \(c>0\). As a consequence, we show that the Frobenius normal form together with a transformation matrix can be computed deterministically in \(O(n^\omega (\log\log(n))^2)\) field operations, and therefore matrix exponentiation \(A^k\) can be performed within the same complexity bound whenever \(\log(k) \in O(n^{\omega-1-\varepsilon})\) for some fixed \(\varepsilon>0\). A key idea for these improvements is to rely on fast algorithms for \(m\times m\) polynomial matrices of average degree \(n/m\), involving high-order lifting and minimal kernel bases.
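For intuition only, here is a minimal Python sketch of the textbook construction described in the abstract, not the paper's fast algorithm: it grows the maximal Krylov basis vector by vector, testing each new iterate for linear dependency by Gaussian elimination, and thus costs on the order of \(n^4\) field operations over the rationals, far from the \(O(n^\omega\log\log(n))\) bound achieved in the paper. The names `maximal_krylov_basis` and `reduce_against` are illustrative choices, not from the source.

```python
from fractions import Fraction

def reduce_against(echelon, v):
    """Reduce v against the echelon rows; return (pivot, reduced_v) if v
    adds a new dimension, or None if v lies in the span seen so far."""
    v = list(v)
    for pivot, row in echelon:
        if v[pivot] != 0:
            c = v[pivot] / row[pivot]
            v = [vi - c * ri for vi, ri in zip(v, row)]
    for j, vj in enumerate(v):
        if vj != 0:
            return j, v
    return None

def maximal_krylov_basis(A, vectors):
    """Naive maximal Krylov basis over the rationals: iterate
    u1, A u1, A^2 u1, ... until a linear dependency appears, then
    continue with u2, and so on, until a basis is obtained."""
    n = len(A)
    echelon = []  # (pivot column, reduced row) pairs for dependency tests
    basis = []    # the Krylov iterates retained as basis vectors
    for u in vectors:
        v = [Fraction(x) for x in u]
        while len(basis) < n:
            red = reduce_against(echelon, v)
            if red is None:   # current iterate depends on earlier ones:
                break         # move on to the next starting vector
            echelon.append(red)
            basis.append(v)
            v = [sum(A[i][k] * v[k] for k in range(n)) for i in range(n)]  # v <- A v
    return basis

# Example: with this 2x2 matrix, the first vector alone yields a full basis.
A = [[Fraction(0), Fraction(1)], [Fraction(-1), Fraction(0)]]
print(maximal_krylov_basis(A, [[1, 0], [0, 1]]))  # [[1, 0], [0, -1]]
```

In this sketch, the echelon rows serve only for the dependency tests, while the unreduced iterates themselves are kept as the basis, matching the definition above.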
Domains
Symbolic Computation [cs.SC]

Origin: Files produced by the author(s)