4.3 Boundary Value Problems

Examples:
- Poisson's and Laplace's equations,
$d^{2} \phi /dx^{2}=-\rho(x)$, or

$ \begin{eqnarray} \frac{\textstyle d \phi}{\textstyle dx} \;\; & = & \;\; - e \\ \frac{\textstyle de}{\textstyle dx}\;\; & = & \;\; \rho(x) \end{eqnarray} $

where $\rho(x)$ is a charge density. (Laplace: $\rho(x)=0$). Another physical problem described by the same equation is the temperature distribution along a thin rod: $d^{2}T/dx^{2}=0$.

- Time independent Schroedinger equation for a particle of mass $m$ in a potential $U(x)$:

$ \begin{eqnarray} \frac{\textstyle d^{2}\psi}{\textstyle dx^{2}} \;\; \;\; & = & \;\; -g(x)\psi , \;\;\; {\rm with}\;\;g(x)=\frac{\textstyle 2m}{\textstyle \hbar^{2}}[E-U(x)] \end{eqnarray} $

The preceding physical examples belong to an important subclass of the general boundary value problem, in that they are all of the form $d^{2}y/dx^{2}=-g(x)y+s(x)$. More generally, the 1-dimensional BVP reads

$ \begin{eqnarray} \frac{\textstyle dy_{i}}{\textstyle dx} \;\; & = & \;\; f_{i}(x,y_{1},\dots y_{N});\;\;\; i=1,\dots N \end{eqnarray} $

with $N$ boundary values required. Typically there are

$n_{1}$     boundary values $a_{j}\;(j=1,\dots n_{1})$ at $x=x_{1}$, and
$n_{2} \equiv N-n_{1}$ boundary values $b_{k}\;(k=1,\dots n_{2})$ at $x=x_{2}$.

The quantities $y_{i}$, $a_{j}$, and $b_{k}$ may simply be higher derivatives of a single solution function $y(x)$. Two methods are available: the Shooting method and the Relaxation technique.






4.3.1 Shooting Method

- Transform the given boundary value problem into an initial value problem with estimated parameters

- Adjust the parameters iteratively to reproduce the given boundary values

First trial shot:
Augment the $n_{1}$ boundary values given at $x=x_{1}$ by $n_{2} \equiv N-n_{1}$ estimated parameters

$ \begin{eqnarray} a^{(1)} & \equiv & \{ a_{k}^{(1)}; \; k=1,\dots n_{2} \}^{T} \end{eqnarray} $

to obtain an IVP. Integrate numerically up to $x=x_{2}$. (For equations of the type $y''=-g(x)y+s(x)$, Numerov's method is best.) The newly calculated values of $b_{k}$ at $x=x_{2}$,

$ \begin{eqnarray} b^{(1)} & \equiv & \{ b_{k}^{(1)}; \; k=1,\dots n_{2} \}^{T} \end{eqnarray} $

will in general deviate from the given boundary values $ b \equiv \{ b_{k}; \dots \}^{T}$. The difference vector $ e^{(1)} \equiv b^{(1)}- b$ is stored for further use.
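
For equations of the Schroedinger type, the Numerov step mentioned above reads, in a minimal sketch (the function name and signature are illustrative, not from the original text):

```python
def numerov_step(y_prev, y_curr, g_prev, g_curr, g_next,
                 s_prev, s_curr, s_next, h):
    """One Numerov step for y'' = -g(x) y + s(x), local error O(h^6).
    Given y at x-h and x, return y at x+h."""
    h2 = h * h
    c_prev = 1.0 + h2 * g_prev / 12.0
    c_curr = 2.0 * (1.0 - 5.0 * h2 * g_curr / 12.0)
    c_next = 1.0 + h2 * g_next / 12.0
    src = h2 * (s_next + 10.0 * s_curr + s_prev) / 12.0
    return (c_curr * y_curr - c_prev * y_prev + src) / c_next
```

Two starting values $y_{0},y_{1}$ are needed to launch the recursion; in the shooting context they follow from the given boundary values and the estimated parameters.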

Second trial shot:
Change the estimated initial values $a_{k}$ by some small amount, $ a^{(2)} \equiv a^{(1)}+\delta a$, and once more integrate up to $x=x_{2}$. The values $b_{k}^{(2)}$ thus obtained are again different from the required values $b_{k}$: $e^{(2)} \equiv b^{(2)}-b$.

Quasi-linearization:
Assuming that the deviations $e^{(1)}$ and $e^{(2)}$ depend linearly on the estimated initial values $a^{(1)}$ and $a^{(2)}$, compute that vector $a^{(3)}$ which would make the deviations disappear:
$ \begin{eqnarray} a^{(3)}& = & a^{(1)}- A^{-1} \cdot e^{(1)},\;\;{\rm with} \;\; A_{ij} \equiv \frac{\textstyle b_{i}^{(2)}-b_{i}^{(1)}}{\textstyle a_{j}^{(2)}-a_{j}^{(1)}} \end{eqnarray} $

Iterate the procedure up to some desired accuracy.
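
In code, one quasi-linearization pass might look as follows. This is a sketch under mild assumptions: `integrate_to_x2` is a placeholder for any IVP integrator (RK, Numerov, ...) returning the computed boundary values at $x=x_{2}$, and for $n_{2}>1$ the matrix $A$ is built with one separate second trial shot per estimated parameter:

```python
import numpy as np

def quasi_linear_update(a1, b_given, integrate_to_x2, da=1.0e-4):
    """One quasi-linearization step of the shooting method.
    integrate_to_x2(a) is a placeholder: it must integrate the IVP with
    trial initial values a and return the boundary values at x = x2."""
    a1 = np.atleast_1d(np.asarray(a1, dtype=float))
    b1 = np.atleast_1d(integrate_to_x2(a1))    # first trial shot
    e1 = b1 - b_given                          # e^(1) = b^(1) - b
    n2 = a1.size
    A = np.empty((n2, n2))
    for j in range(n2):                        # one extra shot per parameter
        a2 = a1.copy()
        a2[j] += da                            # a^(2) = a^(1) + delta a
        b2 = np.atleast_1d(integrate_to_x2(a2))
        A[:, j] = (b2 - b1) / da               # A_ij = (b_i^(2)-b_i^(1))/(a_j^(2)-a_j^(1))
    return a1 - np.linalg.solve(A, e1)         # a^(3) = a^(1) - A^(-1) e^(1)
```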

EXAMPLE:
$ \frac{\textstyle d^{2}y}{\textstyle dx^{2}} = -\frac{\textstyle 1}{\textstyle (1+y)^{2}} \;\;\;\;{\rm with} \;\;\;y(0)=y(1)=0 $


* First trial shot: Choose $a^{(1)} \equiv y '(0)=1.0$. Applying 4th order RK with $\Delta x = 0.1$ we find $b^{(1)} \equiv y_{calc}(1)=0.674$. Thus $e^{(1)}\equiv b^{(1)}-y(1) =0.674$.

* Second trial shot: With $a^{(2)}=1.1$ we find $b^{(2)}=0.787$, i.e. $e^{(2)}=0.787$.

* Quasi-linearization: From
$ \begin{eqnarray} a^{(3)}&=&a^{(1)}-\frac{\textstyle a^{(2)}-a^{(1)}}{\textstyle b^{(2)}-b^{(1)}} e^{(1)} \end{eqnarray} $

we find $a^{(3)}=0.405 (\equiv y '(0))$.

Iteration: The next few iterations yield the following values for $a (\equiv y '(0))$ and $b(\equiv y(1))$:

$n$    $a^{(n)}$    $b^{(n)}$
3      0.405       -0.041
4      0.440        0.003
5      0.437        0.000


(The analytical solution to this problem is known and may be used to check the numerical result.)
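
The complete calculation fits in a few lines. A sketch, assuming nothing beyond the text (4th-order RK with $\Delta x = 0.1$, secant update); it reproduces the table above up to rounding in the last digit:

```python
import numpy as np

def f(x, y):
    # y[0] = y(x), y[1] = y'(x);  y'' = -1/(1+y)^2
    return np.array([y[1], -1.0 / (1.0 + y[0])**2])

def shoot(a, n_steps=10):
    """y(0)=0, y'(0)=a; integrate to x=1 by 4th-order RK, return y_calc(1)."""
    h = 1.0 / n_steps
    x, y = 0.0, np.array([0.0, a])
    for _ in range(n_steps):
        k1 = f(x, y)
        k2 = f(x + h/2, y + h/2 * k1)
        k3 = f(x + h/2, y + h/2 * k2)
        k4 = f(x + h, y + h * k3)
        y = y + h/6 * (k1 + 2*k2 + 2*k3 + k4)
        x += h
    return y[0]

# trial shots, then secant iteration; the target is y(1)=0, so e = b
a1, b1 = 1.0, shoot(1.0)               # b^(1) approx 0.674
a2, b2 = 1.1, shoot(1.1)               # b^(2) approx 0.787
for n in range(3, 7):
    a3 = a1 - (a2 - a1) / (b2 - b1) * b1
    a1, b1 = a2, b2
    a2, b2 = a3, shoot(a3)
    print(n, round(a2, 3), round(b2, 3))
```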




4.3.2 Relaxation Method

Discretize $x$ to transform a given DE into a set of algebraic equations. For example, applying DDST (the three-point difference approximation to the second derivative) to

$ \frac{\textstyle d^{2}y}{\textstyle dx^{2}} = b(x,y) $

we find
$ \frac{\textstyle d^{2}y}{\textstyle dx^{2}} \approx \frac{\textstyle 1}{\textstyle (\Delta x)^{2}}[y_{i+1}-2y_{i}+y_{i-1}] $

which leads to the set of equations

$ y_{i+1}-2y_{i}+y_{i-1}-b_{i}(\Delta x)^{2} = 0,\;\;i=2,\dots M-1 $

Since we have a BVP, $y_{1}$ and $y_{M}$ will be given.

Let $y^{(1)} \equiv \{y_{i}\}$ be an inaccurate solution, e.g. a first estimate. The error components

$ \begin{eqnarray} e_{i}&=&y_{i+1}-2y_{i}+y_{i-1}-b_{i}(\Delta x)^{2},\;\;i=2,\dots M-1 \end{eqnarray} $

together with $e_{1}=e_{M}=0$ then define an error vector $e^{(1)}$.

How should we modify $y^{(1)}$ so as to make $e^{(1)}$ disappear? $\Longrightarrow$ Expand $e_{i}$ linearly:

$ \begin{eqnarray} e_{i}(y_{i-1}+\Delta y_{i-1},y_{i}+\Delta y_{i},y_{i+1}+\Delta y_{i+1}) & \approx & e_{i} + \frac{\textstyle \partial e_{i}}{\textstyle \partial y_{i-1}} \Delta y_{i-1} + \frac{\textstyle \partial e_{i}}{\textstyle \partial y_{i}} \Delta y_{i} + \frac{\textstyle \partial e_{i}}{\textstyle \partial y_{i+1}} \Delta y_{i+1} \\ && \\ & \equiv & e_{i} +\alpha_{i} \Delta y_{i-1} +\beta_{i} \Delta y_{i} + \gamma_{i} \Delta y_{i+1} \;\;(i=1,\dots M) \end{eqnarray} $

This modified error vector is called $e^{(2)}$. We want it to vanish, $e^{(2)}=0$:

$ \begin{eqnarray} A \cdot \Delta y & = & - e^{(1)} \;\;\;\;{\rm with}\;\;\; A=\left(\begin{array}{ccccc} 1 & 0 & & & \\ \alpha_{2} & \beta_{2} & \gamma_{2} & & \\ & \ddots & \ddots & \ddots & \\ & & \alpha_{M-1} & \beta_{M-1} & \gamma_{M-1} \\ & & & 0 & 1 \end{array} \right) \end{eqnarray} $

Thus our system of equations is tridiagonal: $\Longrightarrow$ Recursion technique!
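
In code, the recursion technique for a general tridiagonal system $\alpha_{i}x_{i-1}+\beta_{i}x_{i}+\gamma_{i}x_{i+1}=r_{i}$ might be sketched as follows (zero-based arrays, illustrative names; with $r = -e^{(1)}$ the solution is $\Delta y$):

```python
import numpy as np

def solve_tridiagonal(alpha, beta, gamma, rhs):
    """Solve alpha[i]*x[i-1] + beta[i]*x[i] + gamma[i]*x[i+1] = rhs[i]
    (alpha[0] and gamma[M-1] are never referenced) by the
    downward/upward recursion x[i+1] = g[i]*x[i] + h[i]."""
    M = len(beta)
    g, h = np.zeros(M), np.zeros(M)
    g[M-2] = -alpha[M-1] / beta[M-1]      # start values from the last row
    h[M-2] = rhs[M-1] / beta[M-1]
    for i in range(M-2, 0, -1):           # downward recursion
        denom = beta[i] + gamma[i] * g[i]
        g[i-1] = -alpha[i] / denom
        h[i-1] = (rhs[i] - gamma[i] * h[i]) / denom
    x = np.zeros(M)
    x[0] = (rhs[0] - gamma[0] * h[0]) / (beta[0] + gamma[0] * g[0])
    for i in range(M-1):                  # upward recursion
        x[i+1] = g[i] * x[i] + h[i]
    return x
```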



EXAMPLE:
$ \begin{eqnarray} \frac{\textstyle d^{2}y}{\textstyle dx^{2}} & = & -\frac{\textstyle 1}{\textstyle (1+y)^{2}} \;\;\;\; {\rm with}\;\;\; y(0)=y(1)=0 \end{eqnarray} $

DDST leads to $e_{i} = y_{i+1}-2y_{i}+y_{i-1}+(\Delta x)^{2}/(1+y_{i})^{2}$. Expand:

$ \begin{eqnarray} \alpha_{i} &\equiv & \frac{\textstyle \partial e_{i}}{\textstyle \partial y_{i-1}} =1; \;\; \gamma_{i} \equiv \frac{\textstyle \partial e_{i}}{\textstyle \partial y_{i+1}} =1 ; \;\; \beta_{i} \equiv \frac{\textstyle \partial e_{i}}{\textstyle \partial y_{i}} = - 2 \left[ 1+ \frac{\textstyle (\Delta x)^{2}}{\textstyle (1+y_{i})^{3}} \right] \;\;\;i=2,\dots M-1 \end{eqnarray} $

Start the downwards recursion: $g_{M-1}=-\alpha_{M}/\beta_{M}=0$ and $h_{M-1}=-e_{M}/\beta_{M}=0$.

$ \begin{eqnarray} g_{i-1}&=&\frac{\textstyle -\alpha_{i}}{\textstyle \beta_{i}+\gamma_{i}g_{i}}= \frac{\textstyle -1}{\textstyle \beta_{i}+g_{i}};\;\; h_{i-1}=\frac{\textstyle -e_{i}-h_{i}}{\textstyle \beta_{i}+g_{i}} \end{eqnarray} $

brings us down to $g_{1},h_{1}$. Putting
$ \begin{eqnarray} \Delta y_{1}&=&\frac{\textstyle -e_{1}-\gamma_{1}h_{1}}{\textstyle \beta_{1}+\gamma_{1}g_{1}}=-e_{1} \;(=0) \end{eqnarray} $

we take the upwards recursion
$ \begin{eqnarray} \Delta y_{i+1}&=&g_{i}\Delta y_{i}+h_{i};\;\;i=1,\dots M-1 \end{eqnarray} $

Improve $y_{i} \longrightarrow y_{i}+\Delta y_{i}$ and iterate.
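
Putting the pieces together, a self-contained sketch of the relaxation iteration for this example (assumed: $M=11$ grid points, i.e. $\Delta x = 0.1$; variable names are illustrative):

```python
import numpy as np

M = 11                                   # grid points
dx = 1.0 / (M - 1)                       # dx = 0.1
y = np.zeros(M)                          # initial estimate y_i = 0
for sweep in range(20):
    # error vector; e[0] = e[M-1] = 0, the boundary values being exact
    e = np.zeros(M)
    e[1:-1] = y[2:] - 2.0 * y[1:-1] + y[:-2] + dx**2 / (1.0 + y[1:-1])**2
    if np.max(np.abs(e)) < 1.0e-12:
        break
    beta = -2.0 * (1.0 + dx**2 / (1.0 + y)**3)   # diagonal (interior rows)
    g, h = np.zeros(M), np.zeros(M)      # last row of A is (0 1): g = h = 0
    for i in range(M - 2, 0, -1):        # downward recursion, alpha = gamma = 1
        denom = beta[i] + g[i]
        g[i-1] = -1.0 / denom
        h[i-1] = (-e[i] - h[i]) / denom
    dy = np.zeros(M)                     # Delta y_1 = -e_1 = 0
    for i in range(M - 1):               # upward recursion
        dy[i+1] = g[i] * dy[i] + h[i]
    y += dy                              # improve and iterate
```

Each sweep is one Newton step on the discretized equations, so the iteration converges quadratically from a reasonable starting estimate.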


vesely 2005-10-10
