3.3 Random Sequences
3.3.1 Basics
So far: single random numbers, preferably with no serial
correlations: $\langle x_{n}x_{n+k} \rangle = 0$ for $k \neq 0$.
Now: sequences of random numbers with given serial correlations.
Distribution in function space
Let $\{x(t)\}$ be an ensemble of functions of time $t$.
Then
$
\begin{eqnarray}
P_{1}(x;t) & \equiv & {\cal P} \{x(t) \leq x\}
\;\;\;\;
{\rm and}
\;\;\;\;
p_{1}(x;t) \equiv \frac{\textstyle dP_{1}(x;t)}{\textstyle dx}
\end{eqnarray}
$
are the probability distribution and the corresponding probability density.
EXAMPLE:
Let $x_{0}(t)$ be a deterministic function of time, and assume that the
quantity $x(t)$ at any time $t$ is Gauss distributed about the value
$x_{0}(t)$:
$
p_{1}(x;t)=\frac{\textstyle 1}{\textstyle \sqrt{\textstyle 2\pi \sigma ^{2}}}
e^{\textstyle -\frac{1}{2} \left[x-x_{0}(t)\right]^{2}/ \sigma ^{2}}
$
A random process is called a random sequence if the variable
$t$ may assume only discrete values $\{t_{k}; \; k=0,1,\dots \}$. In
this case one often writes $x(k)$ for $x(t_{k})$.
Higher Distributions
The foregoing definitions may be generalized in the following manner:
$
\begin{eqnarray}
\!\! P_{2}(x_{1},x_{2};t_{1},t_{2}) & \equiv & {\cal P} \{ x(t_{1})\leq x_{1},
x(t_{2})\leq x_{2} \}
\\
\vdots \\
\!\! P_{n}(x_{1},\dots,x_{n};t_{1},\dots,t_{n}) & \equiv & {\cal P}
\{ x(t_{1})\leq x_{1},\dots,x(t_{n})\leq x_{n} \}
\end{eqnarray}
$
Thus $P_{2}(\dots)$ is the compound probability for the events
$x(t_{1})\leq x_{1}$ and $x(t_{2})\leq x_{2}$.
These higher order distribution functions and the corresponding densities
$
\begin{eqnarray}
p_{n}(x_{1},\dots,x_{n};t_{1},\dots,t_{n}) & = &
\frac{\textstyle \partial^{n}P_{n}(x_{1}, \dots , x_{n};t_{1},
\dots,t_{n})}{\textstyle \partial x_{1} \dots \partial x_{n}}
\end{eqnarray}
$
describe the random process in ever greater statistical detail.
Stationarity:
A random process is stationary if
$
\begin{eqnarray}
P_{n}(x_{1},\dots,x_{n};t_{1},\dots,t_{n}) & = &
P_{n}(x_{1},\dots,x_{n};t_{1}+t,\dots,t_{n}+t)
\;\;\;\; {\rm for\ all}\;\; t
\end{eqnarray}
$
This means that the origin of time is of no importance:
$
\begin{eqnarray}
p_{1}(x;t)=p_{1}(x)
\;\;\;\;
& {\rm and} &
\;\;\;\;
p_{2}(x_{1},x_{2};t_{1},t_{2})=p_{2}(x_{1},x_{2};t_{2}-t_{1})
\end{eqnarray}
$
Autocorrelation:
$
\begin{eqnarray}
\langle x(0)x(\tau) \rangle &\equiv&
\int \limits_{\textstyle a}^{\textstyle b}\!\int \limits_{\textstyle a}^{\textstyle b}
x_{1} x_{2}\, p_{2}(x_{1},x_{2};\tau)\, dx_{1} dx_{2}
\end{eqnarray}
$
For
$\tau \rightarrow 0$ the autocorrelation function (acf) approaches the
variance $\langle x^{2} \rangle$. For finite
$\tau$ it tells us how rapidly a particular value of
$x(t)$ will be "forgotten".
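In practice the acf of a sampled sequence is estimated by a time average. A minimal Python/numpy sketch (the function name acf and the normalization $1/(n-k)$ are our choices, not prescribed by the text):

    import numpy as np

    def acf(x, kmax):
        # Estimate <x(n)x(n+k)> for k = 0..kmax by a time average,
        # assuming the sequence x is (approximately) stationary.
        x = np.asarray(x, dtype=float)
        x = x - x.mean()                  # the processes considered here have <x> = 0
        n = len(x)
        return np.array([x[:n - k] @ x[k:] / (n - k) for k in range(kmax + 1)])

For $k=0$ this returns the sample variance; for growing $k$ it shows how fast the sequence "forgets".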
Gaussian process:
The random variables
$x(t_{1}), \dots , x(t_{n}) $ obey a multivariate Gaussian distribution.
The covariance matrix elements are
$\langle x(0)x(t_{j}-t_{i}) \rangle$, i.e. the
values of the autocorrelation function at the respective time displacements. For $n=2$:
$
\begin{eqnarray}
p_{2}(x_{1}, x_{2};\tau) & = &
\frac{\textstyle 1}{\textstyle \sqrt{\textstyle (2\pi)^{2} S_{2}(\tau)}}
e^{\textstyle -\frac{1}{2}Q}
\end{eqnarray}
$
with
$
\begin{eqnarray}
Q & \equiv &
\frac{\textstyle \langle x^{2} \rangle x_{1}^{2} -2 \langle x(0)x(\tau ) \rangle
x_{1}x_{2} + \langle x^{2} \rangle x_{2}^{2}}{\textstyle S_{2}(\tau)}
\end{eqnarray}
$
and the determinant of the covariance matrix
$
\begin{eqnarray}
S_{2}(\tau) &\equiv& \left| \begin{array}{cc}
\langle x^{2} \rangle & \langle x(0)x(\tau) \rangle \\
\langle x(0)x(\tau) \rangle & \langle x^{2} \rangle
\end{array} \right|
= \langle x^{2} \rangle^{2} -
\langle x(0)x(\tau) \rangle^{2}
\end{eqnarray}
$
Markov Process:
A stationary random sequence $\{x_{n}; n=0, 1 \dots\}$
has the Markov property if its "memory" goes back
only one time step:
$
\begin{eqnarray}
p(x_{n} | x_{n-1} \dots x_{1})&=&p(x_{n}|x_{n-1})
\end{eqnarray}
$
where the conditional density
$
\begin{eqnarray}
p(x_{n}|x_{n-1}) & = &
\frac{\textstyle p_{2}(x_{n-1},x_{n})}{\textstyle p_{1}(x_{n-1})}
\end{eqnarray}
$
is the density of $x_{n}$ under the condition
that $x(n-1)=x_{n-1}$.
Thus all statistical properties of the process are contained in
$p_{2}(x_{n-1},x_{n})$.
An even shorter memory would mean that successive elements
of the sequence were not correlated at all.
Gaussian Markov processes:
To describe them uniquely, not even $p_{2}(\dots)$ is needed explicitly:
if the autocorrelation function $\langle x(n) x(n+l) \rangle$ is known,
$p_{2}(\dots)$ and consequently all statistical properties of the process
follow.
Note: The acf of a stationary Gaussian Markov process
is always an exponential:
$
\begin{eqnarray}
\langle x(0)x(\tau) \rangle &=& \langle x^{2} \rangle e^{- \beta \tau}
\end{eqnarray}
$
or
$
\begin{eqnarray}
\langle x(n)x(n+k) \rangle &=& \langle x^{2} \rangle
e^{- \beta \Delta t k}
\end{eqnarray}
$
How to produce a Markov sequence? See below...
3.3.2 Generating a stationary Gaussian Markov sequence
Solve the stochastic differential equation
$
\begin{eqnarray}
\dot{x}(t) &=& - \beta x(t) + s(t)
\end{eqnarray}
$
with a stochastic "driving" process $s(t)$,
assumed to be uncorrelated Gaussian noise, i.e. Gauss distributed about
$\langle s \rangle =0$, with
$\langle s(0) s(t) \rangle = A \delta(t)$.
The general solution to this equation reads
$
\begin{eqnarray}
x(t)&=&x(0)e^{-\beta t} + \int \limits_{\textstyle 0}^{\textstyle t} e^{-\beta (t-t')}
s(t') dt'
\end{eqnarray}
$
Inserting $t=t_{n}$ and
$t=t_{n+1} \equiv t_{n}+\Delta t$ one finds that
$
\begin{eqnarray}
x(t_{n+1})&=&x(t_{n}) e^{-\beta \Delta t} +
\int \limits_{\textstyle 0}^{\textstyle \Delta t} e^{-\beta (\Delta t-t')}
s(t_{n}+t') dt'
\end{eqnarray}
$
At any time $t$, the values of $x(t)$
belong to a stationary Gauss
distribution with $\langle x^{2} \rangle = A/2\beta$,
and the process $\{ x(t_{n}) \}$ has the Markov property.
The integrals
$
\begin{eqnarray}
z(t_{n}) &\equiv& \int \limits_{\textstyle 0}^{\textstyle \Delta t}
e^{-\beta (\Delta t-t')} s(t_{n}+t') dt'
\end{eqnarray}
$
are elements of a random sequence, with $z$ Gauss distributed with zero mean and
$\langle z(t_{n})z(t_{n+k}) \rangle = 0$ for $k \neq 0$. Their variance is
$
\begin{eqnarray}
\langle z^{2} \rangle &=& \frac{A}{2\beta}(1-e^{-2\beta\Delta t})
\end{eqnarray}
$
Here is the resulting recipe
for generating a stationary, Gaussian Markov sequence:
"Langevin Shuffle":
Let the desired stationary Gaussian Markov sequence
$\{x(n);\;n=0,\dots\}$
be defined by the autocorrelation function
$
\langle x(n)x(n+k) \rangle = (A / 2 \beta) e^{-\beta k\Delta t}
$
with given parameters $A$, $\beta$ and $\Delta t$.
- Choose a starting value $x(0)$, either as
$x(0)=0$ or from a Gauss distribution with
$\langle x \rangle =0 $ and $\langle x^{2} \rangle =A/2\beta$.
- Draw $z(n)$ from a Gaussian distribution with
$\langle z \rangle =0$ and
$
\begin{eqnarray}
\langle z^{2} \rangle & = & \frac{A}{2\beta}(1-e^{-2\beta \Delta t})
\end{eqnarray}
$
- Construct
$
\begin{eqnarray}
x(n+1)& = & x(n)e^{-\beta\Delta t} + z(n) \end{eqnarray}
$
The random sequence thus produced has the desired properties.
If $\beta \Delta t \ll 1$,
replace the exponential
by its linear Taylor approximation. The iteration prescription then reads
$
x(n+1)=x(n)(1-\beta\Delta t) +z'(n)
$,
where $z'(n)$ is Gaussian with
$\langle {z'}^{2} \rangle =A \Delta t (1-\beta \Delta t)$.
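As an illustration, here is a minimal Python/numpy sketch of the exact iteration (function and parameter names are ours, not part of the recipe):

    import numpy as np

    def langevin_shuffle(n_steps, A, beta, dt, rng=None):
        # Stationary Gaussian Markov sequence with
        # acf <x(n)x(n+k)> = (A/(2 beta)) exp(-beta k dt).
        if rng is None:
            rng = np.random.default_rng()
        e = np.exp(-beta * dt)
        sig_x = np.sqrt(A / (2.0 * beta))     # stationary standard deviation
        sig_z = sig_x * np.sqrt(1.0 - e * e)  # standard deviation of z(n)
        x = np.empty(n_steps)
        x[0] = rng.normal(0.0, sig_x)         # x(0) from the stationary distribution
        for n in range(n_steps - 1):
            x[n + 1] = e * x[n] + rng.normal(0.0, sig_z)
        return x

The acf estimator sketched in 3.3.1 may then be used to check that the sequence indeed decorrelates as $e^{-\beta k \Delta t}$.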
EXERCISE:
Employ the above procedure to generate a Markov sequence
$\{ x_{n}\}$
with a given $\beta$. Check if the sequence
shows the expected autocorrelation.
3.3.3 Wiener-Lévy Process (Unbiased Random Walk)
With $\beta=0$ in the above differential equation, we find
$
\begin{eqnarray}
x(n+1) &=& x(n)+z(n)
\end{eqnarray}
$
where $z(n)$ is Gaussian with
$
\begin{eqnarray}
z(n) &\equiv& \int \limits_{\textstyle 0}^{\textstyle \Delta t} s(t_{n}+t')dt' \;,\;\;\;\;
\langle z \rangle =0 \; , \;\;\;\;
\langle z^{2} \rangle =A\Delta t
\end{eqnarray}
$
Since the increments $z$ are uncorrelated with each other and with $x$,
we have (starting from $x(0)=0$)
$
\begin{eqnarray}
\langle [x(n)]^{2} \rangle &=& nA\Delta t
\end{eqnarray}
$
EXAMPLE:
Let $x$
be one Cartesian coordinate of a diffusing
particle. Then $\langle [x(n)]^{2} \rangle$ is the mean squared
displacement after $n$
time steps. In this case we may relate the
coefficient $A$
to the diffusion constant according to $A=2D$.
Wiener-Lévy process:
Let $A \Delta t$ be given. Choose $x(0)=0$.
- Pick $z(n)$ from a Gauss distribution with zero mean and variance
$A \Delta t$.
- Compute
$
\begin{eqnarray}
x(n+1)&=&x(n)+z(n)
\end{eqnarray}
$
The random sequence thus produced is a nonstationary Gaussian process with
variance $\langle [x(n)]^{2} \rangle = n A \Delta t$.
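A minimal sketch (names ours); since the increments are independent, the whole trajectory may be produced in one sweep:

    import numpy as np

    def wiener_levy(n_steps, A_dt, rng=None):
        # Unbiased random walk x(n+1) = x(n) + z(n), starting at x(0) = 0,
        # with z Gaussian, <z> = 0 and <z^2> = A*dt.
        if rng is None:
            rng = np.random.default_rng()
        z = rng.normal(0.0, np.sqrt(A_dt), size=n_steps)
        return np.concatenate(([0.0], np.cumsum(z)))

For the exercise below, the 500 trajectories may be generated at once by drawing z with shape (n_steps, 500) and summing along the time axis.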
EXERCISE:
500 random walkers set out from positions $x(0)$
homogeneously
distributed in the interval $[-1,1]$. The initial particle density is
thus rectangular. Each of the random walkers is now set on its course to
perform its own one-dimensional trajectory, with $A\Delta t = 0.01$.
Sketch the particle density after 100, 200, ... steps.
It is not really necessary to draw $z(n)$ from a Gaussian
distribution. If $z(n)$ comes from an equidistribution
in $[-\Delta x/2,\; \Delta x/2]$, the "compound" $x$-increment
after every 10 to 15 steps will again be Gauss distributed (central limit theorem).
We may even discretize the $x$-axis:
$z=0, \; \pm \Delta x$ with equal probability $1/3$:
after many steps, and on a scale which makes
$\Delta x$
appear small, the results will again be the same.
To simulate 2- or 3-dimensional diffusion, apply the
above procedure independently to 2 or 3 coordinates.
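A sketch of these last remarks, with discretized steps $z=0,\pm\Delta x$ applied independently to two coordinates (names ours):

    import numpy as np

    def discrete_walk_2d(n_steps, dx, rng=None):
        # 2-d random walk: each coordinate is incremented independently by
        # -dx, 0, or +dx with equal probability 1/3 at every step.
        if rng is None:
            rng = np.random.default_rng()
        steps = dx * rng.integers(-1, 2, size=(n_steps, 2))  # entries in {-1, 0, +1}
        return np.cumsum(steps, axis=0)                      # trajectory, shape (n_steps, 2)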
3.3.4 Markov Chains (Biased Random Walk)
A Markov sequence with discrete states $x_{\alpha}$
is called a Markov chain.
We generalize the discussion to
"state vectors"
$\{ x_{\alpha}, \; \alpha=1,\dots,M \}$.
The conditional probability
$
\begin{eqnarray}
p_{\alpha \beta} & \equiv &
{\cal P} \{ x(n)=x_{\beta} | x(n-1)= x_{\alpha} \}
\end{eqnarray}
$
is called the transition probability between the states
$\alpha$ and $\beta$.
Let $M$
be the total number of possible states. The $M \times M$
-matrix
$P \equiv \{ p_{\alpha \beta}\}$
and the
$M$-vector $ p$
consisting of the individual probabilities
$ p_{\alpha} \equiv {\cal P} \{ x= x_{\alpha} \}$
determine the statistical properties of the Markov chain uniquely.
A Markov chain is reversible if
$
\begin{eqnarray}
p_{\alpha} p_{\alpha \beta} & = & p_{\beta}p_{\beta \alpha}
\end{eqnarray}
$
- Meaning: in a long chain, transitions $\alpha \rightarrow \beta$ occur just as often as transitions $\beta \rightarrow \alpha$ (detailed balance).
The $M^{2}$ elements of the matrix $P$ are not uniquely defined by the $M(M-1)/2$
reversibility conditions.
$\Longrightarrow$ For a given distribution density
$p$ there are many reversible transition matrices.
$\Longrightarrow$ "Asymmetrical rule" (N. Metropolis):
N. Metropolis' asymmetric rule:
Let $Z$ be the number of states $x_{\beta}$ accessible from
$x_{\alpha}$, and let the a priori access probability be
$\pi_{\alpha \beta } = 1/Z$. Then
$
p_{\alpha \beta} = \pi_{\alpha \beta } \;\;\; {\rm if} \;\; p_{\beta}
\geq p_{\alpha}
$
$
p_{\alpha \beta} = \pi_{\alpha \beta }
\frac{\textstyle p_{\beta}}{\textstyle p_{\alpha}}
\;\;\; {\rm if} \;\; p_{\beta} < p_{\alpha}
$
$\Longrightarrow$
$p_{\alpha \beta}$ is reversible!
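A small sketch of the rule for the simplest accessibility pattern, in which every state may be reached from every other one ($Z=M-1$); the function name and the example vector p are ours:

    import numpy as np

    def metropolis_matrix(p):
        # Transition matrix P according to the asymmetric rule, with
        # a priori access probability pi_ab = 1/(M-1) for all b != a.
        p = np.asarray(p, dtype=float)
        M = len(p)
        P = np.zeros((M, M))
        for a in range(M):
            for b in range(M):
                if b != a:
                    P[a, b] = min(1.0, p[b] / p[a]) / (M - 1)
            P[a, a] = 1.0 - P[a].sum()    # rejected trial moves keep the state
        return P

    p = np.array([0.1, 0.2, 0.3, 0.4])
    P = metropolis_matrix(p)
    assert np.allclose(p[:, None] * P, (p[:, None] * P).T)  # p_a p_ab = p_b p_ba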
3.3.5 Monte Carlo Method
Central theorem:
If the stationary Markov chain characterized by
$p \equiv \{ p_{\alpha} \}$ and $P \equiv \{ p_{\alpha \beta} \}$
is reversible, then each state $x_{\alpha}$
will be visited, in the course of a sufficiently long chain,
with the relative frequency $p_{\alpha}$.
$\Longrightarrow$
Here is yet another recipe for generating random numbers with a
given probability density $p$:
Random numbers à la Metropolis:
Let $p \equiv \{ p_{\alpha}; \; \alpha=1,2,\dots \}$
be the vector of probabilities of the events $x=x_{\alpha}$.
To generate a random sequence
$\{ x(n)\}$ in which the relative frequency of the event
$x(n)=x_{\alpha}$ approaches $p_{\alpha}$:
- After the $n$-th step, let $x(n)=x_{\alpha}$. Draw a
value $x_{\beta}$ from a region around
$x_{\alpha}$, e.g. according to
$
x_{\beta}=x_{\alpha}+ \left( \xi -0.5 \right) \Delta x
$
where $\xi \in (0,1)$ is equidistributed.
- If for $p_{\beta} \equiv p(x_{\beta})$ we have
$p_{\beta} \geq p_{\alpha}$, then let $x(n+1)=x_{\beta}$.
- If $p_{\beta} < p_{\alpha}$, then pick an equidistributed random number
$\xi \in (0,1)$; if $\xi < p_{\beta}/p_{\alpha}$,
let $x(n+1)=x_{\beta}$; else put
$x(n+1)=x_{\alpha}$.
Adjust the parameter $\Delta x$ such that approximately
one out of two trial moves leads to a new state,
$x(n+1)=x_{\beta}$.
Warning: The random numbers thus produced are serially correlated:
$\langle x(n)x(n+k) \rangle \neq 0$.
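A minimal sketch of this prescription (names ours); the two acceptance cases combine into a single comparison, since for $p_{\beta} \geq p_{\alpha}$ the condition below is always fulfilled:

    import numpy as np

    def metropolis_sample(p, n_steps, dx, x0=0.0, rng=None):
        # Random sequence whose relative frequencies approach the density p;
        # only the ratio p(trial)/p(x) enters, so p may be unnormalized.
        if rng is None:
            rng = np.random.default_rng()
        x = np.empty(n_steps)
        x[0] = x0
        for n in range(n_steps - 1):
            trial = x[n] + (rng.random() - 0.5) * dx  # symmetric trial move
            if rng.random() < p(trial) / p(x[n]):     # accept with prob. min(1, ratio)
                x[n + 1] = trial
            else:
                x[n + 1] = x[n]                       # rejected: the old state repeats
        return x

For instance, metropolis_sample(lambda x: np.exp(-x**2), 100000, dx=2.0) may serve for the exercise below; $\Delta x$ should be tuned until about one out of two trial moves is accepted.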
EXERCISE:
Let
$p(x)=A \, \exp[-x^{2}]$ be the desired probability density. Apply
Metropolis' prescription to generate random numbers with this
density. Confirm that
$\langle x(n)x(n+k) \rangle \neq 0$.
Advantage of Metropolis' method: only the ratio
$p_{\beta}/p_{\alpha}$ is needed, so the normalization of $p$ never enters.
$\Longrightarrow$
Statistical-mechanical Monte Carlo simulation:
only relative thermodynamic probabilities needed!