Given two n×n matrices A and A_0 and a sequence of subspaces {0} = V_0 ⊂ ··· ⊂ V_n = R^n with dim(V_k) = k, the k-th subspace-projected approximated matrix A_k is defined as A_k = A + Π_k (A_0 − A) Π_k, where Π_k is the orthogonal projection on V_k^⊥.
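As a quick numerical sanity check of this definition (a NumPy sketch, not taken from the article; the matrices and the nested subspaces below are arbitrary illustrative choices), one can form Π_k explicitly and confirm that A_k agrees with A on V_k:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A  = rng.standard_normal((n, n))     # the target matrix
A0 = np.diag(np.diag(A))             # a simple approximation of A (here: its diagonal)

# Columns of Q are orthonormal; the first k of them span V_k, giving nested subspaces.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))

for k in range(n + 1):
    Qk = Q[:, :k]                    # orthonormal basis of V_k
    Pk = np.eye(n) - Qk @ Qk.T       # orthogonal projection on the complement of V_k
    Ak = A + Pk @ (A0 - A) @ Pk      # the k-th subspace-projected approximate matrix
    # A_k agrees with A on V_k from both sides: A_k v = A v and v* A_k = v* A
    assert np.allclose(Ak @ Qk, A @ Qk)
    assert np.allclose(Qk.T @ Ak, Qk.T @ A)
```

Since Π_0 = I the loop starts with A_0 exactly, and since Π_n = 0 it ends with A_n = A; the projection Π_k is formed densely here only for illustration.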
Consequently, A_k v = Av and v^∗A_k = v^∗A for all v ∈ V_k. Thus (A_k)_{k=0}^n is a sequence of matrices that gradually changes from A_0 into A_n = A. In principle, the definition of V_{k+1} may depend on properties of A_k, which can be exploited to try to force A_{k+1} to be closer to A in some specific sense. By choosing A_0 as a simple approximation of A, this
turns the subspace-approximated matrices into interesting preconditioners for linear
algebra problems involving A. In the context of eigenvalue problems, they appeared
in this role in Shepard et al. (2001), resulting in their Subspace Projected Approximate Matrix (SPAM) method. In this article, we investigate their use in solving linear systems of
equations Ax = b. In particular, we seek conditions under which the solutions x_k of the approximate systems A_k x_k = b are computable at low computational cost, so the efficiency of the corresponding method is competitive with existing methods such as the
Conjugate Gradient and the Minimal Residual methods. We also consider how well the
sequence (x_k)_{k≥0} approximates x, by performing some illustrative numerical tests.
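To illustrate the last point, a small self-contained experiment (not from the article; the diagonally dominant A, the diagonal A0, the right-hand side b, and the random nested subspaces are illustrative assumptions) solves each approximate system A_k x_k = b directly and records the error ‖x_k − x‖; by construction the final system has A_n = A, so x_n recovers x:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
A  = rng.standard_normal((n, n)) + 3 * n * np.eye(n)  # well-conditioned illustrative A
A0 = np.diag(np.diag(A))                              # simple approximation: diagonal of A
b  = rng.standard_normal(n)
x  = np.linalg.solve(A, b)                            # reference solution of Ax = b

Q, _ = np.linalg.qr(rng.standard_normal((n, n)))      # first k columns of Q span V_k

errors = []
for k in range(n + 1):
    Qk = Q[:, :k]
    Pk = np.eye(n) - Qk @ Qk.T                        # projection on the complement of V_k
    Ak = A + Pk @ (A0 - A) @ Pk                       # k-th approximate matrix
    xk = np.linalg.solve(Ak, b)                       # solve A_k x_k = b (dense, for illustration)
    errors.append(np.linalg.norm(xk - x))

print(errors)  # errors[0] is the error of the simple guess from A0; errors[-1] is zero up to rounding
```

A practical method would of course not solve each A_k x_k = b densely; the point of the article is precisely to identify when these solves are cheap. The error need not decrease monotonically for arbitrary subspace choices, which is why the choice of V_{k+1} matters.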