East Asian J. Appl. Math., 3 (2013), pp. 120-137.
Published online: 2018-02
Given two $n\times n$ matrices $A$ and $A_0$ and a sequence of subspaces $\{0\}=\mathscr{V}_0\subset\cdots\subset\mathscr{V}_n=\mathbb{R}^n$ with $\dim(\mathscr{V}_k)=k$, the $k$-th subspace projected approximate matrix $A_k$ is defined as $A_k=A+\Pi_k(A_0-A)\Pi_k$, where $\Pi_k$ is the orthogonal projection onto $\mathscr{V}_k^{\perp}$. Consequently, $A_k v=Av$ and $v^{*}A_k=v^{*}A$ for all $v\in\mathscr{V}_k$. Thus $(A_k)_{k=0}^{n}$ is a sequence of matrices that gradually changes from $A_0$ into $A_n=A$. In principle, the definition of $\mathscr{V}_{k+1}$ may depend on properties of $A_k$, which can be exploited to try to force $A_{k+1}$ to be closer to $A$ in some specific sense. By choosing $A_0$ as a simple approximation of $A$, this turns the subspace projected approximate matrices into interesting preconditioners for linear algebra problems involving $A$. In the context of eigenvalue problems, they appeared in this role in Shepard et al. (2001), resulting in their Subspace Projected Approximate Matrix (SPAM) method. In this article, we investigate their use in solving linear systems of equations $Ax=b$. In particular, we seek conditions under which the solutions $x_k$ of the approximate systems $A_kx_k=b$ are computable at low computational cost, so that the resulting method is competitive with existing methods such as the Conjugate Gradient and Minimal Residual methods. We also consider how well the sequence $(x_k)_{k\geq 0}$ approximates $x$, by performing some illustrative numerical tests.
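To make the construction concrete, here is a minimal NumPy sketch (not code from the paper) that builds $A_k=A+\Pi_k(A_0-A)\Pi_k$ and checks the consistency properties stated above. The choices $\mathscr{V}_k=\mathrm{span}(e_1,\dots,e_k)$ and $A_0=\mathrm{diag}(A)$ are illustrative assumptions only; the paper leaves both open.

```python
import numpy as np

# Illustrative sketch (not the authors' implementation): build the subspace
# projected approximate matrices A_k = A + Pi_k (A0 - A) Pi_k and verify
# that A_k agrees with A on V_k from both sides. As assumptions for this
# demo only: V_k = span(e_1, ..., e_k) and A0 = diag(A).

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
A0 = np.diag(np.diag(A))                 # a simple approximation of A

def spam_matrix(A, A0, k):
    """Return A_k = A + Pi_k (A0 - A) Pi_k with V_k = span(e_1, ..., e_k)."""
    n = A.shape[0]
    V = np.eye(n)[:, :k]                 # orthonormal basis of V_k
    Pi = np.eye(n) - V @ V.T             # orthogonal projector onto V_k^perp
    return A + Pi @ (A0 - A) @ Pi

# The sequence starts at A0 and ends at A:
assert np.allclose(spam_matrix(A, A0, 0), A0)
assert np.allclose(spam_matrix(A, A0, n), A)

# A_k v = A v and v* A_k = v* A for every v in V_k:
k = 3
Ak = spam_matrix(A, A0, k)
v = np.zeros(n)
v[:k] = rng.standard_normal(k)           # an arbitrary v in V_k
assert np.allclose(Ak @ v, A @ v)
assert np.allclose(v @ Ak, v @ A)

# Solutions x_k of the approximate systems A_k x_k = b:
b = rng.standard_normal(n)
x = np.linalg.solve(A, b)
for k in range(n + 1):
    xk = np.linalg.solve(spam_matrix(A, A0, k), b)
    print(k, np.linalg.norm(xk - x))
```

Since $A_n=A$, the final iterate $x_n$ solves $Ax=b$ exactly; whether the intermediate $x_k$ can be obtained cheaply, and how fast they approach $x$, is precisely what the article investigates.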