

2.3.2 Jacobi Relaxation

Divide the given matrix according to ${\bf A}={\bf D}+{\bf L}+{\bf R}$, where ${\bf D}$ contains only the diagonal elements of ${\bf A}$, while ${\bf L}$ and ${\bf R}$ are the left (lower triangular) and right (upper triangular) parts of ${\bf A}$, respectively.
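
As a concrete illustration of this splitting (added here for clarity), a general $3 \times 3$ matrix decomposes as

\begin{displaymath}
\left( \begin{array}{ccc} a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33} \end{array} \right)
=
\left( \begin{array}{ccc} a_{11}&0&0\\ 0&a_{22}&0\\ 0&0&a_{33} \end{array} \right)
+
\left( \begin{array}{ccc} 0&0&0\\ a_{21}&0&0\\ a_{31}&a_{32}&0 \end{array} \right)
+
\left( \begin{array}{ccc} 0&a_{12}&a_{13}\\ 0&0&a_{23}\\ 0&0&0 \end{array} \right)
\end{displaymath}

with the three terms on the right being ${\bf D}$, ${\bf L}$, and ${\bf R}$, respectively.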

Choose ${\bf B}={\bf D}$ and write the iteration formula as

\begin{displaymath}
\fbox{$
{\bf D} \cdot {\bf x}_{k+1}={\bf b}+\left[{\bf D}-{\bf A}\right] \cdot {\bf x}_{k}
$}
\end{displaymath}

or, written out by components,

\begin{displaymath}
a_{ii} \, x_{i}^{(k+1)} = b_{i}-\sum_{j \neq i}a_{ij}\,x_{j}^{(k)}\, ; \;\;\; i=1,\dots,N
\end{displaymath}
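
The component-wise formula maps directly onto a simple program. The sketch below (plain Python with NumPy; the routine names jacobi_step and jacobi_solve, the tolerance, and the iteration cap are illustrative choices, not taken from the text) performs repeated Jacobi sweeps until successive iterates agree to within a given tolerance:

\begin{verbatim}
import numpy as np

def jacobi_step(A, b, x):
    # One Jacobi sweep:  a_ii * x_i_new = b_i - sum_{j != i} a_ij * x_j
    D = np.diag(A)                    # diagonal elements a_ii
    off_diag = A - np.diagflat(D)     # L + R, i.e. A with its diagonal removed
    return (b - off_diag @ x) / D

def jacobi_solve(A, b, x0, tol=1.0e-6, max_iter=1000):
    # Iterate until successive solutions differ by less than tol
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        x_new = jacobi_step(A, b, x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter
\end{verbatim}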






EXAMPLE: In ${\bf A} \cdot {\bf x}={\bf b}$ let

\begin{displaymath}
{\bf A}=\left( \begin{array}{cc} 3&1\\ 2&4 \end{array} \right), \;\;\;
{\bf b}=\left( \begin{array}{r} 3 \\ 2 \end{array} \right)
\end{displaymath}

Starting from the estimated solution

\begin{displaymath}
{\bf x}_{0}=\left( \begin{array}{r} 1.2 \\ 0.2 \end{array} \right)
\end{displaymath}

and using the diagonal part of ${\bf A}$,

\begin{displaymath}
{\bf D}=\left( \begin{array}{cc} 3&0\\ 0&4 \end{array} \right)
\end{displaymath}

in the iteration we find increasingly accurate approximations to the solution

\begin{displaymath}
{\bf x}_{1}=\left( \begin{array}{r} 0.933 \\ -0.100 \end{array} \right), \;\;\;
{\bf x}_{2}=\left( \begin{array}{r} 1.033 \\ 0.033 \end{array} \right), \;\;\;
\dots \;\; \longrightarrow \;\;
{\bf x}=\left( \begin{array}{r} 1 \\ 0 \end{array} \right)
\end{displaymath}
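
As a numerical check on these iterates, here is a minimal sketch (again plain Python with NumPy; variable names are illustrative) that runs a few Jacobi sweeps for this $2 \times 2$ system:

\begin{verbatim}
import numpy as np

A = np.array([[3.0, 1.0],
              [2.0, 4.0]])
b = np.array([3.0, 2.0])
x = np.array([1.2, 0.2])              # starting estimate x_0

D = np.diag(A)
for k in range(1, 11):
    x = (b - (A - np.diagflat(D)) @ x) / D   # one Jacobi sweep
    print(k, x)
# k = 1 gives (0.933, -0.100); the iterates approach (1, 0)
\end{verbatim}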





Convergence rate:

Writing the Jacobi scheme in the form

\begin{displaymath}
{\bf x}_{k+1}={\bf D}^{-1} \cdot {\bf b}+{\bf D}^{-1} \cdot \left[{\bf D}-{\bf A}\right] \cdot {\bf x}_{k}
\equiv {\bf D}^{-1} \cdot {\bf b}+{\bf J} \cdot {\bf x}_{k}
\end{displaymath}

with the Jacobi block matrix

\begin{displaymath}
{\bf J} \equiv {\bf D}^{-1} \cdot \left[ {\bf D}-{\bf A} \right]
=-{\bf D}^{-1} \cdot \left[ {\bf L}+{\bf R}\right]
\end{displaymath}

convergence requires that all eigenvalues of ${\bf J}$ be smaller than one in absolute value. Denoting the largest eigenvalue in magnitude (the spectral radius) of ${\bf J}$ by $\lambda_{J}$, we have for the asymptotic rate of convergence
\begin{displaymath}
r_{J} \equiv \frac{\left\vert {\bf x}_{k+1}-{\bf x}_{k}\right\vert}{\left\vert {\bf x}_{k}-{\bf x} \right\vert} \approx \left\vert \lambda_{J}-1\right\vert
\end{displaymath}

(Since ${\bf x}_{k+1}-{\bf x}_{k}=\left[{\bf J}-{\bf 1}\right] \cdot \left[{\bf x}_{k}-{\bf x}\right]$, and for large $k$ the error is dominated by the eigenvector belonging to $\lambda_{J}$, the distance between successive iterates is about $\left\vert \lambda_{J}-1\right\vert$ times the current error.)


In the above example $\lambda_{J}=0.408$ and $r_{J} \approx 0.59$.
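
The quoted spectral radius is easy to verify numerically. The following sketch (plain Python with NumPy; names are illustrative) builds ${\bf J}=-{\bf D}^{-1} \cdot \left[{\bf L}+{\bf R}\right]$ for the example matrix and evaluates $\left\vert \lambda_{J}-1\right\vert$:

\begin{verbatim}
import numpy as np

A = np.array([[3.0, 1.0],
              [2.0, 4.0]])
D = np.diagflat(np.diag(A))
J = np.linalg.inv(D) @ (D - A)          # J = D^{-1}(D - A) = -D^{-1}(L + R)

lam_J = max(abs(np.linalg.eigvals(J)))  # spectral radius lambda_J
print(lam_J)                            # approx 0.408
print(abs(lam_J - 1.0))                 # asymptotic rate r_J, approx 0.59
\end{verbatim}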

The electrostatic problem shown before can be treated by Jacobi iteration in the same way.



Franz J. Vesely Oct 2005
See also:
"Computational Physics - An Introduction," Kluwer-Plenum 2001