*Shinya Miyajima (Iwate University)
In this talk, we are concerned with the accuracy of computed eigenvalues and eigenvectors in the eigenvalue problem $$ Ax = \lambda x, \qquad A \in \mathbb{R}^{n \times n}, \quad \lambda \in \mathbb{R}, \quad x \in \mathbb{R}^{n}\setminus\{0\}, $$ where $A$ is symmetric, $\lambda$ is an eigenvalue, and $x$ is an eigenvector corresponding to $\lambda$. Let $k \le n$, and let $P \in \mathbb{R}^{n \times k}$ and $B \in \mathbb{R}^{k \times k}$ satisfy $AP = PB$, where $B$ is not necessarily diagonal. The eigenvalues of $B$ are eigenvalues of $A$, and ${\rm span}(P)$ is called the invariant subspace of $A$ corresponding to these eigenvalues. The eigenvectors of $A$ corresponding to these eigenvalues are contained in this subspace.

We consider computing verified error bounds for all numerically obtained (approximate) eigenvalues and eigenvectors, in which all possible rounding errors are taken into account. In 2006, Miyajima et al. proposed such an algorithm, which involves only four floating-point matrix multiplications for computing all the error bounds. When the eigenvalues are closely clustered, however, this algorithm does not give error bounds for approximate eigenvectors and/or bases of invariant subspaces. Recently, Rump and Lange proposed an algorithm for this purpose. Their algorithm gives error bounds for approximate eigenvectors when the corresponding eigenvalues are well separated, and provides error bounds for approximate bases of invariant subspaces when the eigenvalues are closely clustered.

This talk has two purposes. The first is to present theories for computing error bounds for approximate bases of invariant subspaces when the eigenvalues are closely clustered. The second is to propose an algorithm for computing error bounds for all approximate eigenvalues, together with error bounds for approximate eigenvectors or bases of invariant subspaces. We develop this algorithm by combining the algorithm of Miyajima et al. with the presented theories. Particular emphasis is put on the computational cost of the proposed algorithm: no additional procedures of cubic complexity are needed to compute the error bounds for approximate bases of invariant subspaces. As a consequence, the proposed algorithm also involves only four floating-point matrix multiplications, and no other procedures of cubic complexity, for computing all the error bounds.
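As a small illustration of the relation $AP = PB$ above, the following sketch (assuming NumPy) builds a symmetric test matrix with a cluster of three nearly equal eigenvalues, forms an approximate basis $P$ of the corresponding invariant subspace from computed eigenvectors, and checks the residual. The test matrix and cluster indices are illustrative assumptions only; this plain floating-point check is not the verified algorithm of the talk, which additionally bounds all rounding errors.

```python
# Illustrative sketch only: A P = P B with B = P^T A P for an (approximate)
# orthonormal basis P of an invariant subspace of a symmetric matrix A.
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))             # random orthogonal matrix
eigs = np.array([1.0, 2.0, 2.0 + 1e-10, 2.0 + 2e-10, 5.0])   # clustered eigenvalues (assumed example)
A = Q @ np.diag(eigs) @ Q.T                                   # symmetric test matrix

lam, V = np.linalg.eigh(A)            # approximate eigenvalues/eigenvectors
cluster = slice(1, 4)                 # indices of the clustered eigenvalues
P = V[:, cluster]                     # approximate basis of the invariant subspace
B = P.T @ A @ P                       # compressed matrix, not necessarily diagonal

print(np.linalg.norm(A @ P - P @ B))          # residual ||AP - PB|| is tiny
print(np.sort(np.linalg.eigvalsh(B)))         # eigenvalues of B are the clustered eigenvalues of A
```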