The generalized minimal-residual method (GMRES) was introduced by Saad and Schultz in 1986, and to this day it remains one of the most popular iterative algorithms for solving large linear systems; it is very versatile (it works on general, i.e. non-Hermitian, systems) and has a relatively simple derivation, which I will try to convey in this post. In the following I restrict the notation to the case in which all vectors have real components, since the complex-valued case just involves exchanging ℝ for ℂ.
At each iteration, GMRES produces an increasingly accurate approximation x̃ to the solution of a linear system Ax = b, where the approximation minimizes the Euclidean (L2) norm of the residual (hence “minimal residual”):
$$ \tilde{x} = \underset{x}{\arg \min} \| A x - b \|, $$
where A ∈ ℝm × m is assumed to be nonsingular (note that no symmetry requirement is placed on it), and x, b ∈ ℝm.
The approximate solution is sought in subspaces of increasing dimension; in other words, at iteration i ∈ 1..m the approximation x̃_i lives in an i-dimensional subspace of ℝm, so its coordinate vector in the corresponding basis has only i components.
This means that we need a sequence of orthonormal bases for these subspaces (the “Krylov subspaces”); the matrices collecting the basis vectors map coordinates in ℝi to vectors in ℝm, and the Arnoldi process provides them as by-products.
At iteration i, the Krylov-subspace approximation to a vector x ∈ ℝm can be written as x = Q_i y with y ∈ ℝi, in which the Q matrices satisfy the Arnoldi recursion A Q_i = Q_{i+1} H_i. Recall that Q_i is shorthand for a matrix whose i columns are orthonormal vectors in ℝm.
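The Arnoldi recursion can be sketched in a few lines of NumPy (a rough illustration; the function name, the breakdown tolerance, and the use of modified Gram–Schmidt are my own choices, not taken from the text above):

```python
import numpy as np

def arnoldi(A, b, k):
    """Arnoldi process: build an orthonormal Krylov basis Q for
    span{b, Ab, ..., A^(k-1) b} and the (k+1) x k upper-Hessenberg
    matrix H satisfying A Q[:, :k] = Q @ H."""
    m = A.shape[0]
    Q = np.zeros((m, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        v = A @ Q[:, j]
        for i in range(j + 1):        # orthogonalize (modified Gram-Schmidt)
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] < 1e-12:       # breakdown: Krylov subspace is invariant
            return Q[:, :j + 1], H[:j + 1, :j]
        Q[:, j + 1] = v / H[j + 1, j]
    return Q, H
```

The recursion A Q_i = Q_{i+1} H_i can be verified numerically on the returned matrices.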
We can now plug in the Krylov approximation and the Arnoldi identity into the residual equation to re-formulate the minimal residual problem:
$$ \|A x - b\| = \|A Q_i y - b\| = \left\|Q_{i+1} H_i y - b \right\| $$
Since the columns of the Q_i matrices are orthonormal, if Q_i v = w then v = Q_i† w; moreover, b lies in the span of the columns of Q_{i+1} (it is ∥b∥ times the first Arnoldi vector), so the norm is preserved and the residual can be rewritten as follows:
$$ \left\| H_i y - Q_{i+1}^\dagger b \right\| $$
We notice that Q_{i+1}† b expresses b in the Krylov basis; since the first basis vector is exactly b/∥b∥, this is just ∥b∥ e_1 =: b_0.
At this point we focus on the Hessenberg matrix H_i (specifically, an upper-Hessenberg matrix: all elements below the first sub-diagonal are identically zero). This structure is computationally convenient, as we will see shortly.
Let us denote the QR factorization of H_i as O_i U_i (where O_i is orthogonal and U_i is upper triangular). Since the factorization only needs to annihilate the nonzero elements below the diagonal, applying it to a Hessenberg matrix requires just one Givens rotation per sub-diagonal element, i.e. i rotations in total, far fewer operations than for a general dense matrix.
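To illustrate why the Hessenberg structure helps, here is a NumPy sketch of a Givens-based QR factorization for an (i+1) × i Hessenberg matrix (a hypothetical helper of my own, assuming the sub-diagonal entries it rotates are not all zero):

```python
import numpy as np

def hessenberg_qr(H):
    """QR-factorize an (n+1) x n upper-Hessenberg matrix with one Givens
    rotation per sub-diagonal element; returns O (orthogonal) and
    U (upper triangular) such that H = O @ U."""
    m, n = H.shape
    U = H.copy().astype(float)
    O = np.eye(m)
    for j in range(n):
        a, b = U[j, j], U[j + 1, j]
        r = np.hypot(a, b)
        c, s = a / r, b / r
        G = np.array([[c, s], [-s, c]])       # rotation zeroing U[j+1, j]
        U[j:j + 2, j:] = G @ U[j:j + 2, j:]   # only rows j, j+1 are touched
        O[:, j:j + 2] = O[:, j:j + 2] @ G.T   # accumulate the orthogonal factor
    return O, U
```

Each rotation touches only two rows, which is exactly why the cost stays linear in the number of columns.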
At this point we are ready to write the GMRES problem in its final form
$$ \|A x - b\| = \left\|O_i U_i y - b_0 \right\| = \left\|U_i y - O_i^\dagger b_0 \right\| \to \min_y $$
which can be solved efficiently by back-substitution since U_i is upper triangular (its last row is zero, so in practice one solves the top i × i block; the magnitude of the last entry of O_i† b_0 yields the residual norm for free).
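Back-substitution itself takes only a few lines; this is a generic sketch for a square upper-triangular system (in GMRES it would be applied to the top i × i block of U_i; the function name is mine):

```python
import numpy as np

def back_substitute(U, g):
    """Solve the upper-triangular system U y = g by back-substitution,
    working from the last row upwards."""
    n = U.shape[1]
    y = np.zeros(n)
    for j in range(n - 1, -1, -1):
        # subtract the already-known components, then divide by the pivot
        y[j] = (g[j] - U[j, j + 1:] @ y[j + 1:]) / U[j, j]
    return y
```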
Conclusions and some optimizations
To recap, at step i the GMRES algorithm requires:

1. applying the Arnoldi process to obtain an orthonormal basis for the i-dimensional Krylov subspace of {A, b},
2. factorizing the Hessenberg matrix obtained at step 1 into its QR factors,
3. solving for the minimal-residual coordinate vector y using a triangular solver (i.e. back-substitution),
4. retrieving the full-space solution using the Krylov relation x̃ = Q_i y.
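Putting the four steps together, a minimal dense GMRES sketch might look as follows (assumptions mine: a single Arnoldi pass of fixed length, and numpy.linalg.lstsq standing in for the Givens QR plus back-substitution of steps 2–3):

```python
import numpy as np

def gmres(A, b, k):
    """Minimal (dense, unrestarted) GMRES sketch: k Arnoldi steps, then
    solve the small problem min_y ||H y - ||b|| e1|| and map back x = Q y."""
    m = A.shape[0]
    Q = np.zeros((m, k + 1))
    H = np.zeros((k + 1, k))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(k):
        v = A @ Q[:, j]
        for i in range(j + 1):             # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] < 1e-12:            # breakdown: Krylov subspace exhausted
            Q, H = Q[:, :j + 1], H[:j + 1, :j + 1]
            break
        Q[:, j + 1] = v / H[j + 1, j]
    rhs = np.zeros(H.shape[0])
    rhs[0] = beta                          # b in the Krylov basis is ||b|| e1
    y, *_ = np.linalg.lstsq(H, rhs, rcond=None)
    return Q[:, :H.shape[1]] @ y
```

With k equal to the dimension of the system, this recovers the exact solution (up to rounding), which is a handy sanity check.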
The version of GMRES I have implemented in sparse-linear-algebra does not carry out steps 1–4 repeatedly; it simply finds the “largest” Krylov subspace (i.e. carries out Arnoldi until it breaks down, that is, until the norm of the candidate i-th orthogonal vector is found to be almost 0) and then proceeds once with steps 2–4. I have found this simpler approach to work quite well in practice.
A popular optimization is to interleave the Arnoldi process with the QR factorization; in particular, the leading i columns of H_{i+1} coincide with H_i (padded with a row of zeros), so computing the QR factorization of H_{i+1} can recycle the rotations already computed for H_i: they are applied to the new column, after which a single additional Givens rotation zeroes its sub-diagonal element.
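A sketch of this incremental update (the helper name and data layout are mine: each call stores one new rotation and appends one column of the triangular factor):

```python
import numpy as np

def extend_qr(rotations, R_cols, h):
    """Extend the QR factorization of a Hessenberg matrix by one column:
    apply the previously stored Givens rotations to the new Arnoldi
    column h, then compute one new rotation to zero its sub-diagonal
    entry.  rotations is a list of (c, s) pairs; R_cols collects the
    columns of the upper-triangular factor."""
    h = h.astype(float).copy()
    for j, (c, s) in enumerate(rotations):       # recycle old rotations
        h[j], h[j + 1] = c * h[j] + s * h[j + 1], -s * h[j] + c * h[j + 1]
    j = len(rotations)
    r = np.hypot(h[j], h[j + 1])                 # one new rotation suffices
    rotations.append((h[j] / r, h[j + 1] / r))
    h[j], h[j + 1] = r, 0.0
    R_cols.append(h)
    return rotations, R_cols
```

Feeding the columns of a Hessenberg matrix H one at a time yields the same triangular factor U as a batch QR, which can be checked via the identity U†U = H†H.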
References
Y. Saad and M.H. Schultz, “GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems”, SIAM J. Sci. Stat. Comput., 7:856-869, 1986.