2.3 Relaxation Methods


Solve $ A \cdot x \, = \, b $   by iteration:



2.3.1 Iterative Improvement

Let $x$ be the exact solution of $ A \cdot x= b$,
and let $ x'$ be an inaccurate (or estimated) solution vector, such that $ x \equiv x' + \delta x $.

Inserting this into the given equation, we find $ A \cdot \delta x = b - A \cdot x' \equiv c $,
which may be solved for $\delta x$. (Use double precision!)

EXAMPLE:

Let
\begin{displaymath} A= \left( \begin{array}{cc} 1&2 \\ 3&4 \end{array} \right) , \;\; b= \left( \begin{array}{r} 3 \\ 2 \end{array} \right) \;\; \mbox{and} \;\; x'= \left( \begin{array}{r} -3 \\ 4 \end{array} \right) \end{displaymath}


From
\begin{displaymath} A \cdot \delta x = \left( \begin{array}{r} 3 \\ 2 \end{array} \right) - \left( \begin{array}{cc} 1&2 \\ 3&4 \end{array} \right) \cdot \left( \begin{array}{r} -3 \\ 4 \end{array} \right) = \left( \begin{array}{r} -2 \\ -5 \end{array} \right) \end{displaymath}

we find, using the LU decomposition $A = L \cdot U$ with
\begin{displaymath} L= \left( \begin{array}{cc} 1&0 \\ 3&1 \end{array} \right) \;\;\; \mbox{and} \;\;\; U= \left( \begin{array}{cc} 1&2 \\ 0&-2 \end{array} \right) , \end{displaymath}

the correction vector
\begin{displaymath} \delta x= \left( \begin{array}{c} -1 \\ -\frac{1}{2} \end{array} \right) \;\;\;\; \mbox{so that} \;\;\;\; x= \left( \begin{array}{c} -4 \\ \frac{7}{2} \end{array} \right) . \end{displaymath}
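
This correction step is easily sketched in Python/NumPy (an illustration only; the LU solve is delegated to scipy.linalg, whose routine pivots, so its internal factors may differ from the $L$ and $U$ shown above while the correction is the same):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

# data of the example above
A = np.array([[1., 2.],
              [3., 4.]])
b = np.array([3., 2.])
x_approx = np.array([-3., 4.])     # inaccurate solution x'

c = b - A @ x_approx               # residual, best computed in double precision
dx = lu_solve(lu_factor(A), c)     # solve A.dx = c via an LU decomposition
x = x_approx + dx

print(dx)                          # [-1.  -0.5]
print(x)                           # [-4.   3.5]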


Basis of the relaxation method:
Let us interpret the improvement equation as an iterative formula:

\begin{eqnarray} A \cdot \left( x_{k+1}- x_{k}\right)&=& b - A \cdot x_{k} \end{eqnarray}

Replace $A$ on the left hand side by an easily invertible matrix $B$ close to $A$:

\begin{eqnarray} B \cdot \left( x_{k+1}- x_{k}\right)&=& b - A \cdot x_{k} \end{eqnarray}

or
\begin{eqnarray} x_{k+1}&=& B^{-1} \cdot b + B^{-1} \cdot \left[ B- A\right] \cdot x_{k} \end{eqnarray}

This procedure converges to the solution of $ A \cdot x= b$ if $\left| x_{k+1}- x_{k}\right| < \left| x_{k}- x_{k-1}\right|$. This is the case if all eigenvalues of the matrix $ B^{-1}\cdot\left[ B- A \right]$ are situated within the unit circle.

Depending on the specific choice of the matrix $ B $ we arrive at the recipes for Jacobi Relaxation, Gauss-Seidel, and Successive Over-Relaxation.
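
A minimal Python/NumPy sketch of this generic scheme (relax is just an illustrative helper name; B may be any easily inverted approximation of A):

import numpy as np

def relax(A, B, b, x0, nsteps=50):
    """Generic relaxation:  B (x_{k+1} - x_k) = b - A x_k."""
    x = x0.astype(float)
    for _ in range(nsteps):
        x = x + np.linalg.solve(B, b - A @ x)   # one relaxation step
    return x

With $B=D$, the diagonal part of $A$, this reduces to the Jacobi scheme of the next subsection; $B=D+L$ gives Gauss-Seidel relaxation.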



2.3.2 Jacobi Relaxation

Divide the given matrix according to
$ A= D+ L+ R$

where $ D$ contains only the diagonal elements of $ A$, while $L$ and $R$ are the strictly lower (left) and strictly upper (right) triangular parts of $ A$, respectively.

Choose $B=D$ and write the iteration formula as

\begin{eqnarray} && D \cdot x_{k+1}= b+\left[ D- A\right] \cdot x_{k} \end{eqnarray}
or

\begin{eqnarray} a_{ii} \, x_{i}^{(k+1)}&=&b_{i} -\sum \limits_{j \neq i}a_{ij}\,x_{j}^{(k)}\, ; \;\;\; i=1,\dots,N \end{eqnarray}


EXAMPLE: In $ A \cdot x= b$ let
\begin{displaymath} A= \left( \begin{array}{cc} 3&1 \\ 2&4 \end{array} \right) \, ; \;\; b=\left( \begin{array}{r} 3 \\ 2 \end{array} \right) \end{displaymath}

Starting from the estimated solution

\begin{displaymath} x_{0}= \left( \begin{array}{r} 1.2 \\ 0.2 \end{array} \right) \end{displaymath}

and using the diagonal part of $ A$,
\begin{displaymath} D=\left( \begin{array}{cc} 3&0 \\ 0&4 \end{array} \right) \end{displaymath}

in the iteration, we find the increasingly accurate solutions

\begin{displaymath} x_{1}= \left( \begin{array}{r} 0.933 \\ -0.100 \end{array} \right) \, ; \;\; x_{2}= \left( \begin{array}{r} 1.033 \\ 0.033 \end{array} \right) \, ; \;\; \mbox{etc.} \;\; \rightarrow \;\; x_{\infty}= \left( \begin{array}{r} 1 \\ 0 \end{array} \right) \end{displaymath}
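
In code, each Jacobi step amounts to a single diagonal solve; the following NumPy sketch reproduces the numbers of this example:

import numpy as np

A = np.array([[3., 1.],
              [2., 4.]])
b = np.array([3., 2.])
x = np.array([1.2, 0.2])          # starting estimate x_0
D = np.diag(np.diag(A))           # diagonal part of A

for k in range(20):
    # Jacobi step:  D x_{k+1} = b + (D - A) x_k
    x = np.linalg.solve(D, b + (D - A) @ x)
    print(k + 1, x)               # (0.933, -0.100), (1.033, 0.033), ... -> (1, 0)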



Convergence rate:

Writing the Jacobi scheme in the form

\begin{eqnarray} x_{k+1}&=& D^{-1} \cdot b+ D^{-1} \cdot \left[D- A \right] \cdot x_{k} \equiv D^{-1} \cdot b+ J \cdot x_{k} \end{eqnarray}

with the Jacobi block matrix

\begin{eqnarray} J &\equiv& D^{-1} \cdot \left[ D- A\right] =- D^{-1} \cdot \left[ L+ R\right] \end{eqnarray}

convergence requires that all eigenvalues of $ J$ be smaller than one in absolute value. Denoting the largest absolute eigenvalue (the spectral radius) of $J$ by $\lambda_{J}$, we have for the asymptotic rate of convergence

\begin{eqnarray} r_{J} &\equiv& \frac{\left| x_{k+1}- x_{k}\right|}{\left| x_{k}- x \right|} \approx \left| \lambda_{J}-1\right| \end{eqnarray}


In the above example $\lambda_{J}=0.408$ and $r_{J} \approx 0.59$.
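
The spectral radius is easily checked numerically, e.g. with NumPy (a sketch for the example matrix):

import numpy as np

A = np.array([[3., 1.],
              [2., 4.]])
D = np.diag(np.diag(A))

J = np.linalg.solve(D, D - A)             # J = D^{-1} (D - A)
lam_J = max(abs(np.linalg.eigvals(J)))    # spectral radius
print(lam_J, abs(lam_J - 1.0))            # ~0.408 and ~0.59 (= r_J)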

The electrostatic problem shown before was treated by Jacobi iteration.





2.3.3 Gauss-Seidel Relaxation (GSR)

This scheme converges somewhat faster than Jacobi relaxation.
Choose $B=D+L$ (i.e. the lower triangle of $A$, including the diagonal):

\begin{eqnarray} && \left[ D + L \right] \cdot x_{k+1}= b - R \cdot x_{k} \end{eqnarray}

Solving the set of implicit equations

\begin{eqnarray} a_{ii} \, x_{i}^{(k+1)} &=& b_{i} - \sum \limits_{j > i} a_{ij} \, x_{j}^{(k)} - \sum \limits_{j < i} a_{ij} \, x_{j}^{(k+1)} \; ; \;\;\; i=1,\dots,N \;\;\;\;\;\;\;\; (2.1) \end{eqnarray}

is almost as simple as solving the explicit Jacobi equations. Since the matrices $D+L$ and $R$ are triangular, the only change is that in the downward recursion (2.1) some of the right-hand terms refer to the present iteration $k+1$ instead of the previous one, $k$; however, each such term is available as soon as it is required.

EXAMPLE: With the same data as in the previous example we find the first two improved solutions

\begin{displaymath} x_{1}= \left( \begin{array}{r} 0.933 \\ 0.033 \end{array} \right) \, ; \;\; x_{2}=\left( \begin{array}{r} 0.989 \\ 0.006 \end{array} \right) \, . \end{displaymath}
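
A NumPy sketch of the downward recursion (2.1), again with the data of the example:

import numpy as np

A = np.array([[3., 1.],
              [2., 4.]])
b = np.array([3., 2.])
x = np.array([1.2, 0.2])
N = len(b)

for k in range(10):
    for i in range(N):                 # downward recursion (2.1)
        s_new = A[i, :i] @ x[:i]       # terms already updated in this sweep
        s_old = A[i, i+1:] @ x[i+1:]   # terms from the previous iteration
        x[i] = (b[i] - s_new - s_old) / A[i, i]
    print(k + 1, x)                    # (0.933, 0.033), (0.989, 0.006), ...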

The convergence rate of the GSR scheme is governed by the matrix

\begin{eqnarray} G &\equiv& - \left[D + L \right]^{-1} \cdot R \end{eqnarray}


It can be shown that the spectral radius of $G$ is given by

\begin{eqnarray} \lambda_{G} &=& \lambda_{J}^{2} \end{eqnarray}

so that the rate of convergence is now
\begin{eqnarray} r_{G} &\approx& \left| \lambda_{J}^{2}-1 \right| \end{eqnarray}


In our example $\lambda_{G}=0.17$ and $r_{G} \approx 0.83$.
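
This is easily verified numerically for the example matrix (a short NumPy check):

import numpy as np

A = np.array([[3., 1.],
              [2., 4.]])
DL = np.tril(A)                            # D + L (lower triangle of A)
R = np.triu(A, 1)                          # strictly upper (right) part

G = -np.linalg.solve(DL, R)
print(max(abs(np.linalg.eigvals(G))))      # ~0.167 = lambda_J**2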


2.3.4 Successive Over-Relaxation (SOR)

At each iteration step, compute the new vector $x_{k+1}$ using GSR; then "mix it" with the previous vector $x_{k}$:

\begin{eqnarray} x_{k+1}^{SOR}&=&\omega \, x_{k+1}^{GSR}+(1-\omega) x_{k} \end{eqnarray}

The "relaxation parameter" $\omega$ may be varied within the range $0 \leq \omega \leq 2$ to optimize the method.

The complete iteration formula is

\begin{eqnarray} && \left[ D + L \right] \cdot x_{k+1} = \omega \, b - \left[ \omega \, R - (1-\omega) \left( D + L \right) \right] \cdot x_{k} \end{eqnarray}

A single row in this system of equations reads

\begin{eqnarray} a_{ii} \, x_{i}^{(k+1)} +\sum \limits_{ j < i} a_{ij} \, x_{j}^{(k+1)} &=& \omega \, b_{i} - \omega \sum \limits_{j>i} a_{ij} \, x_{j}^{(k)} + \\ && + \, (1-\omega)\sum \limits_{j\leq i} a_{ij} \, x_{j}^{(k)} \, ; \;\;\; i=1, \dots , N \end{eqnarray}


The rate of convergence of this procedure is governed by the matrix

\begin{eqnarray} S &\equiv& - \left[ D + L \right]^{-1} \cdot \left[ \omega \, R - (1-\omega) \left( D + L \right) \right] \end{eqnarray}

The optimal value of $\omega$ is given by

\begin{eqnarray} \omega_{opt} &=& \frac{\textstyle 2}{\textstyle 1+\sqrt{1-\lambda_{J}^{2}}} \end{eqnarray}

yielding

\begin{eqnarray} \lambda_{S} &=& \left[ \frac{\textstyle \lambda_{J}}{\textstyle 1+\sqrt{1-\lambda_{J}^{2}}} \right]^{2} \end{eqnarray}

The asymptotic rate of convergence is
\begin{eqnarray} r_{S} &\approx& \left| \lambda_{S} - 1 \right| \end{eqnarray}




EXAMPLE: With the same data as before we find an optimal relaxation parameter $\omega_{opt}=1.046$ and, from that, $r_{S} \approx 0.95$. The first two iterations yield

\begin{displaymath} x_{1}= \left( \begin{array}{r} 0.921 \\ 0.026 \end{array} \right) \, ; \;\; x_{2}=\left( \begin{array}{r} 0.994 \\ 0.003 \end{array} \right) \, . \end{displaymath}
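
A NumPy sketch of this SOR variant as defined above (one full GSR sweep, then mixing with the previous vector); it reproduces the numbers of the example:

import numpy as np

A = np.array([[3., 1.],
              [2., 4.]])
b = np.array([3., 2.])
x = np.array([1.2, 0.2])
N = len(b)

# optimal relaxation parameter from the Jacobi spectral radius
D = np.diag(np.diag(A))
lam_J = max(abs(np.linalg.eigvals(np.linalg.solve(D, D - A))))
omega = 2.0 / (1.0 + np.sqrt(1.0 - lam_J**2))      # ~1.046

for k in range(10):
    x_gsr = x.copy()
    for i in range(N):                             # one full GSR sweep
        x_gsr[i] = (b[i] - A[i, :i] @ x_gsr[:i]
                         - A[i, i+1:] @ x[i+1:]) / A[i, i]
    x = omega * x_gsr + (1.0 - omega) * x          # mix with the previous vector
    print(k + 1, x)                                # (0.921, 0.026), (0.994, 0.003), ...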





Chebyscheff Acceleration:

During the first few iteration steps the SOR procedure may give rise to overshooting corrections, particularly if $\omega$ is distinctly larger than 1. $\Longrightarrow$ Adjust $\omega$ on the fly: start out with $\omega = 1$, then approach $\omega_{opt}$.
  • Split the solution vector $x$ into even and odd elements, $x_{e}$ and $x_{o}$; do the same with $b$.
  • The two subvectors $x_{e}$ and $x_{o}$ are iterated in alternating succession, with the relaxation parameter adjusted at each half-sweep as given by the following recursion (sketched in code below):

    \begin{eqnarray} \omega^{(0)}&=&1 \\ \omega^{(1)}&=&\frac{\textstyle 1}{\textstyle 1-\lambda_{J}^{2}/2} \\ \omega^{(k+1)}&=&\frac{\textstyle 1}{\textstyle 1-\lambda_{J}^{2}\omega^{(k)}/4} \,,\;\;k=1, \dots \end{eqnarray}
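
A short sketch of the resulting $\omega$ schedule (one value per half-sweep, i.e. per update of $x_{e}$ or $x_{o}$; chebyshev_omegas is an illustrative helper, and $\lambda_{J}$ is assumed to be known):

def chebyshev_omegas(lam_J, n_halfsweeps):
    """Relaxation parameters omega^(0), omega^(1), ... for successive half-sweeps."""
    omegas = [1.0]                                    # omega^(0) = 1
    omega = 1.0 / (1.0 - lam_J**2 / 2.0)              # omega^(1)
    omegas.append(omega)
    for _ in range(n_halfsweeps - 2):
        omega = 1.0 / (1.0 - lam_J**2 * omega / 4.0)  # omega^(k+1)
        omegas.append(omega)
    return omegas

# with lambda_J = 0.408 the sequence approaches omega_opt ~ 1.046
print(chebyshev_omegas(0.408, 8))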




