
2.3.4 Successive Over-Relaxation (SOR)

At each iteration step, compute the new vector $\mbox{$\bf x$}_{k+1}$ using GSR; then ``mix it'' with the previous vector $\mbox{$\bf x$}_{k}$:
\begin{displaymath}
\mathbf{x}_{k+1}^{SOR} = \omega \, \mathbf{x}_{k+1}^{GSR} + (1-\omega)\, \mathbf{x}_{k}
\end{displaymath}

The ``relaxation parameter'' $\omega$ may be varied within the range $0 < \omega < 2$ to optimize the method.
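As a concrete illustration, here is a minimal Python/NumPy sketch of one such step, written exactly as described above: a full GSR sweep starting from $\mathbf{x}_{k}$, followed by the mixing with $\mathbf{x}_{k}$. (The small test system at the end is an arbitrary illustration, not the example used later in these notes.)

\begin{verbatim}
import numpy as np

def sor_step(A, b, x_old, omega):
    """One SOR iteration: GSR sweep starting from x_old, then mix with x_old."""
    n = len(b)
    x_gsr = x_old.astype(float)
    for i in range(n):
        s_lower = A[i, :i] @ x_gsr[:i]       # already updated components (j < i)
        s_upper = A[i, i+1:] @ x_old[i+1:]   # components of the previous iterate (j > i)
        x_gsr[i] = (b[i] - s_lower - s_upper) / A[i, i]
    return omega * x_gsr + (1.0 - omega) * x_old

# illustrative 2x2 system with solution (1, 1)
A = np.array([[4.0, -1.0], [-1.0, 4.0]])
b = np.array([3.0, 3.0])
x = np.zeros(2)
for _ in range(20):
    x = sor_step(A, b, x, omega=1.05)
\end{verbatim}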

The complete iteration formula is

\begin{displaymath}
\fbox{$\left[ \mathbf{D} + \mathbf{L} \right] \cdot \mathbf{x}_{k+1}
= \omega \, \mathbf{b} - \left[ \mathbf{R} - (1-\omega) \, \mathbf{A} \right] \cdot \mathbf{x}_{k}$}
\end{displaymath}
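This boxed form follows directly from the mixing prescription above: multiplying it by $\left[\mathbf{D}+\mathbf{L}\right]$ and using the GSR relation $\left[\mathbf{D}+\mathbf{L}\right]\cdot\mathbf{x}_{k+1}^{GSR}=\mathbf{b}-\mathbf{R}\cdot\mathbf{x}_{k}$ together with the splitting $\mathbf{A}=\mathbf{L}+\mathbf{D}+\mathbf{R}$ of the preceding subsections, one finds
\begin{displaymath}
\left[\mathbf{D}+\mathbf{L}\right]\cdot\mathbf{x}_{k+1}^{SOR}
= \omega\left(\mathbf{b}-\mathbf{R}\cdot\mathbf{x}_{k}\right)
+ (1-\omega)\left[\mathbf{A}-\mathbf{R}\right]\cdot\mathbf{x}_{k}
= \omega\,\mathbf{b} - \left[\mathbf{R}-(1-\omega)\,\mathbf{A}\right]\cdot\mathbf{x}_{k} \, .
\end{displaymath}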


A single row in this system of equations reads
\begin{displaymath}
a_{ii} \, x_{i}^{(k+1)} + \sum_{j<i} a_{ij} \, x_{j}^{(k+1)}
= \omega \, b_{i} - \omega \sum_{j>i} a_{ij} \, x_{j}^{(k)}
+ (1-\omega) \sum_{j\leq i} a_{ij} \, x_{j}^{(k)} ,
\hspace{24pt} i=1, \dots , N
\end{displaymath}
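Solving this row for the new component gives the update that a program actually performs in a sweep over $i$:
\begin{displaymath}
x_{i}^{(k+1)} = \frac{1}{a_{ii}}\left[\omega \, b_{i}
- \omega \sum_{j>i} a_{ij} \, x_{j}^{(k)}
+ (1-\omega) \sum_{j\leq i} a_{ij} \, x_{j}^{(k)}
- \sum_{j<i} a_{ij} \, x_{j}^{(k+1)}\right]
\end{displaymath}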




The rate of convergence of this procedure is governed by the matrix
\begin{displaymath}
\mathbf{S} \equiv - \left[ \mathbf{D} + \mathbf{L} \right]^{-1} \cdot
\left[ \mathbf{R} - (1-\omega) \, \mathbf{A} \right]
\end{displaymath}
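Whether and how fast the iteration converges is decided by the spectral radius of $\mathbf{S}$, i.e. its largest eigenvalue magnitude, which must lie below 1. A minimal NumPy sketch for checking this on a small dense matrix (the function name and setup are illustrative only):

\begin{verbatim}
import numpy as np

def sor_spectral_radius(A, omega):
    """Spectral radius of S = -(D+L)^(-1) (R - (1-omega) A)."""
    D = np.diag(np.diag(A))   # diagonal part
    L = np.tril(A, k=-1)      # strictly lower triangle
    R = np.triu(A, k=1)       # strictly upper triangle
    S = -np.linalg.solve(D + L, R - (1.0 - omega) * A)
    return max(abs(np.linalg.eigvals(S)))
\end{verbatim}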

The optimal value of $\omega$ is given by
\begin{displaymath}
\omega_{opt} = \frac{2}{1+\sqrt{1-\lambda_{J}^{2}}}
\end{displaymath}

where $\lambda_{J}$ is the spectral radius of the Jacobi iteration matrix. This yields
\begin{displaymath}
\lambda_{S} = \left[ \frac{\lambda_{J}}{1+\sqrt{1-\lambda_{J}^{2}}}\right]^{2}
\end{displaymath}

The asymptotic rate of convergence is
\begin{displaymath}
r_{S} \approx \left\vert \lambda_{S} - 1 \right\vert
\end{displaymath}
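A small helper evaluating these three formulas, assuming the Jacobi spectral radius $\lambda_{J}$ is already known from the preceding subsections:

\begin{verbatim}
from math import sqrt

def sor_parameters(lam_J):
    """Optimal omega, resulting spectral radius lambda_S, and
    asymptotic convergence rate r_S for a given Jacobi spectral
    radius lam_J (0 < lam_J < 1)."""
    root = sqrt(1.0 - lam_J**2)
    omega_opt = 2.0 / (1.0 + root)
    lam_S = (lam_J / (1.0 + root))**2
    r_S = abs(lam_S - 1.0)
    return omega_opt, lam_S, r_S
\end{verbatim}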






EXAMPLE: With the same data as before we find an optimal relaxation parameter $\omega_{opt}=1.046$, and from that $r_{S}=0.95$. The first two iterations yield

\begin{displaymath}
\mathbf{x}_{1}=\left( \begin{array}{r} 0.921 \\ \dots \end{array} \right) ,
\hspace{24pt}
\mathbf{x}_{2}=\left( \begin{array}{r} 0.994 \\ 0.003 \end{array} \right) \, .
\end{displaymath}
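(For orientation: via the formulas above, $\omega_{opt}=1.046$ corresponds to a Jacobi spectral radius $\lambda_{J}\approx 0.41$, hence $\lambda_{S}\approx 0.046$ and $r_{S}\approx 0.95$, in agreement with the quoted rate.)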





Chebyshev Acceleration:

During the first few iteration steps the SOR procedure may give rise to overshooting corrections, particularly if $\omega$ is distinctly larger than 1. The remedy is to adjust $\omega$ on the fly: start out with $\omega = 1$, then let it approach $\omega_{opt}$; a sketch of one such schedule is given below.
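The notes do not spell out the schedule; one widely used prescription (a sketch, assuming $\lambda_{J}$ is known) is the recursion below, whose fixed point is exactly the $\omega_{opt}$ given above:

\begin{verbatim}
def omega_schedule(lam_J, n_steps):
    """Chebyshev-style schedule for the relaxation parameter:
    start at omega = 1 and let it approach
    omega_opt = 2 / (1 + sqrt(1 - lam_J**2))."""
    omega = 1.0
    omegas = []
    for _ in range(n_steps):
        omegas.append(omega)
        # the fixed point of this map is omega_opt
        omega = 1.0 / (1.0 - 0.25 * lam_J**2 * omega)
    return omegas
\end{verbatim}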

Franz J. Vesely Oct 2005
See also:
"Computational Physics - An Introduction," Kluwer-Plenum 2001