Appendix
Machine Errors
Typically, in a computer real numbers are stored as follows:

$x = [\,s \,|\, e \,|\, m\,]$ (one sign bit $s$, a stored exponent $e$, and a 23-bit mantissa $m$)

or, in a more usual notation,

$x = (-1)^{s}\, m \cdot 2^{\,e-b}$
- The mantissa is normalized, i.e. shifted to the left as far as
possible, such that there is a $1$ in the first position; each left-shift
by one position makes the exponent smaller by $1$. (Since the leftmost
bit of $m$ is then known to be $1$, it need not be stored at all, permitting
one further left-shift and a corresponding gain in accuracy; $m$ then has an
effective length of 24 bits.)
- The bias $b$ is a fixed, machine-specific
integer number to be added to the ``actual'' exponent,
such that the stored exponent $e$ remains positive.
EXAMPLE:
With a bias of $b = 127$ (and keeping the high-end bit of the
mantissa) the internal representation of the number $0.25 = 0.1_2 \cdot 2^{-1}$ is, using
$s = 0$ and $e = -1 + 127 = 126$,

$[\,0 \,|\, 0111\,1110 \,|\, 1000\,0000\ldots\,]$
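The bit layout described above can be inspected directly. The following sketch assumes IEEE-754 single precision (bias 127, 23 stored mantissa bits with a hidden leading 1), which differs in detail from the generic scheme above but follows the same sign/exponent/mantissa pattern:

```python
import struct

def float_bits(x):
    """Split an IEEE-754 single-precision number into sign, biased exponent,
    and stored mantissa bits (the leading 1 is implicit and not stored)."""
    (i,) = struct.unpack('>I', struct.pack('>f', x))
    sign = i >> 31
    exponent = (i >> 23) & 0xFF     # actual exponent plus the bias 127
    mantissa = i & 0x7FFFFF         # 23 explicitly stored bits
    return sign, exponent, mantissa

# 0.25 = 1.0 * 2^(-2): stored exponent is -2 + 127 = 125, mantissa bits all zero
print(float_bits(0.25))
```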
Before any addition or subtraction the exponents of the two arguments must
be equalized; to this end the smaller exponent is increased, and
the respective mantissa is right-shifted (decreased). All bits of the
mantissa that are thus being ``expelled'' at the right end are lost for
the accuracy of the result. The resulting error is called roundoff error.
By machine accuracy $\varepsilon_m$ we denote the smallest number that, when added
to $1$, produces a result $> 1$. In the above example the number
$\varepsilon_m = 2^{-23} \approx 1.2 \cdot 10^{-7}$, when added to $1$, would just produce a
result $> 1$, while the next smaller representable number $2^{-24}$
would leave not a rack behind:

$1 + 2^{-23} > 1$ but $1 + 2^{-24} = 1$
A particularly dangerous situation arises when two almost equal numbers have
to be subtracted. Such a case is depicted in Figure 8.4.

[Figure 8.4: Subtraction of two almost equal numbers]

In the last (normalization) step the mantissa is arbitrarily filled up by
zeros; the uncertainty of the result is correspondingly large.
There is an everyday task in which such small differences may arise: solving
the quadratic equation $ax^2 + bx + c = 0$. The usual formula

$x_{1,2} = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$    (8.91)

will yield inaccurate results whenever $ac \ll b^2$. Since in writing a
program one must always provide for the worst possible case, it is recommended
to use the equivalent but less error-prone formula

$x_1 = \frac{q}{a}, \qquad x_2 = \frac{c}{q}$    (8.92)

with

$q \equiv -\frac{1}{2}\left[\, b + \mathrm{sgn}(b)\,\sqrt{b^2 - 4ac}\,\right]$    (8.93)
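The cancellation-free recipe of equations 8.92 and 8.93 can be sketched as follows (function name and test values are illustrative, not from the text):

```python
import math

def solve_quadratic(a, b, c):
    """Roots of a*x^2 + b*x + c = 0 via q = -[b + sgn(b)*sqrt(b^2 - 4ac)]/2,
    which avoids subtracting two almost equal numbers when ac << b^2."""
    q = -0.5 * (b + math.copysign(math.sqrt(b * b - 4.0 * a * c), b))
    return q / a, c / q

# For x^2 + 1e8 x + 1 = 0 the naive formula returns 0.0 for the small root
# (complete cancellation), while c/q recovers it as approximately -1e-8.
x1, x2 = solve_quadratic(1.0, 1e8, 1.0)
```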
EXERCISE:
Assess the machine accuracy of your computer by trying various negative powers
of $2$, each time adding and subtracting the number $1$ and checking whether
the result is zero.
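The exercise above amounts to the following loop, a minimal sketch that halves a trial number until adding it to $1$ leaves no trace:

```python
def machine_epsilon():
    """Return the smallest power of 2 that still changes 1.0 when added to it."""
    eps = 1.0
    while 1.0 + eps / 2.0 > 1.0:
        eps /= 2.0
    return eps

# On hardware with IEEE-754 double precision this prints 2**-52 ~ 2.2e-16
print(machine_epsilon())
```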
Discrete Fourier Transformation

Fundamentals

We are using the convention

$\tilde f(\nu) \equiv \int_{-\infty}^{\infty} f(t)\, e^{2\pi i \nu t}\, dt$    (8.94)
Assume that the function $f(t)$ is given only at discrete, equidistant
values of its argument:

$f_n \equiv f(t_n), \qquad t_n = n\,\Delta t, \quad n = \ldots, -1, 0, 1, \ldots$    (8.95)
The reciprocal value $1/\Delta t$ of the time increment is called the
sampling rate. The higher the sampling rate, the more details of the
given function will be captured by the table of discrete values
$\{f_n\}$. This intuitively evident fact is put in quantitative terms by
Nyquist's theorem:
if the Fourier spectrum of $f(t)$,

$\tilde f(\nu) = \int_{-\infty}^{\infty} f(t)\, e^{2\pi i \nu t}\, dt,$    (8.96)

is negligible for frequencies beyond the critical (or Nyquist)
frequency

$\nu_c \equiv \frac{1}{2\,\Delta t},$    (8.97)

then $f(t)$ is called a band-limited process. Such a process is
completely determined by its sampled values $f_n$.
The formula that permits the reconstruction of $f(t)$ from the sampled data
reads

$f(t) = \Delta t \sum_{n=-\infty}^{\infty} f_n\, \frac{\sin[2\pi\nu_c (t - t_n)]}{\pi\,(t - t_n)}$    (8.98)
(In contrast, if $f(t)$ is not band-limited, sampling with finite time
resolution results in ``mirroring in'' the outlying parts of the
spectrum from beyond $\pm\nu_c$, superposing them on the correct
spectrum. In signal processing this effect is known as ``aliasing''.)
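Aliasing is easy to demonstrate numerically: a sine beyond the Nyquist frequency yields exactly the same samples as a suitably mirrored-in low frequency. (The helper below and its test values are illustrative, not from the text.)

```python
import numpy as np

def aliased(f, dt):
    """Frequency in [-nu_c, nu_c) that sampling at interval dt actually 'sees'
    when the true frequency is f; nu_c = 1/(2*dt) is the Nyquist frequency."""
    nu_c = 0.5 / dt
    return (f + nu_c) % (2.0 * nu_c) - nu_c

dt = 0.1                      # Nyquist frequency: 5 Hz
t = np.arange(32) * dt
f_true = 7.0                  # beyond the Nyquist frequency
f_seen = aliased(f_true, dt)  # -3 Hz
# The two sampled signals are indistinguishable:
same = np.allclose(np.sin(2*np.pi*f_true*t), np.sin(2*np.pi*f_seen*t))
```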
Let us assume now that a finite set of $N$ sampled values is given:

$f_n \equiv f(t_n), \qquad t_n = n\,\Delta t, \quad n = 0, \ldots, N-1,$    (8.99)
and let $N$ be an even number. Define discrete frequencies by

$\nu_k \equiv \frac{k}{N\,\Delta t}, \qquad k = -\frac{N}{2}, \ldots, \frac{N}{2}$    (8.100)
(The $\nu_{N/2}$ pertaining to $k = N/2$ is again the Nyquist frequency.) Then the
Fourier transform of $f(t)$ at some frequency $\nu_k$ is given by

$\tilde f(\nu_k) \approx \Delta t \sum_{n=0}^{N-1} f_n\, e^{2\pi i \nu_k t_n} = \Delta t \sum_{n=0}^{N-1} f_n\, e^{2\pi i k n / N}$    (8.101)
Thus it makes sense to define the discrete Fourier transform of the $f_n$
as

$F_k \equiv \sum_{n=0}^{N-1} f_n\, e^{2\pi i k n / N}$

According to 8.101 the Fourier transform proper is just
$\tilde f(\nu_k) \approx \Delta t\, F_k$.
From the definition of $F_k$ it follows that
$F_{k+N} = F_k$, in particular $F_{-k} = F_{N-k}$. We make
use of this periodicity to renumber the $F_k$ such that the index $k$
runs from $0$ to $N-1$ (instead of $-N/2$ to $N/2$):

$F_0,\ F_1,\ \ldots,\ F_{N/2},\ F_{N/2+1}\,(= F_{-N/2+1}),\ \ldots,\ F_{N-1}\,(= F_{-1})$
With this indexing convention the back transformation may be conveniently
written

$f_n = \frac{1}{N} \sum_{k=0}^{N-1} F_k\, e^{-2\pi i k n / N}$    (8.102)
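As a direct check of these conventions, a naive $O(N^2)$ implementation of the transform pair (forward sum with $e^{+2\pi i kn/N}$, back transformation 8.102) might look like this; note that this sign convention is the opposite of, e.g., NumPy's default forward transform:

```python
import numpy as np

def dft(f):
    """Discrete Fourier transform F_k = sum_n f_n exp(+2 pi i k n / N); O(N^2)."""
    N = len(f)
    n = np.arange(N)
    return np.exp(2j * np.pi * np.outer(n, n) / N) @ np.asarray(f, dtype=complex)

def idft(F):
    """Back transformation f_n = (1/N) sum_k F_k exp(-2 pi i k n / N)."""
    N = len(F)
    n = np.arange(N)
    return np.exp(-2j * np.pi * np.outer(n, n) / N) @ np.asarray(F, dtype=complex) / N
```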
Fast Fourier Transform (FFT)

If we were to use the definition 8.102 ``as is'' to calculate
the discrete Fourier transform, we would have to perform some $N^2$
operations. Cooley and Tukey (and before them Danielson and
Lanczos; see [PRESS 86]) have demonstrated how, by smart handling of data,
the number of operations may be pushed down to
$N \log_2 N$.
Note that for $N = 10^3$ this is an acceleration of $\approx 100$. Indeed, many
algorithms of modern computational physics hinge on this possibility of
rapidly transforming back and forth long tables of function values.
In the following it is always assumed that $N = 2^m$. If $N$ is not a
power of $2$, simply ``pad'' the table, putting $f_n = 0$ up to the
next useful table length. Defining

$W \equiv e^{2\pi i / N}$    (8.103)
we realize that $W^{k+N} = W^k$, $W^{k+N/2} = -W^k$,
etc. The discrete Fourier transform
is therefore

$F_k = \sum_{n=0}^{N-1} f_n W^{kn} = \sum_{m=0}^{N/2-1} f_{2m} W^{2mk} + W^{k} \sum_{m=0}^{N/2-1} f_{2m+1} W^{2mk} \equiv F_k^{e} + W^{k} F_k^{o}$    (8.107)

where the indices e and o stand for ``even'' and
``odd''. Next we treat each of the two terms to the right of
8.107 by the same pattern, finding

$F_k = F_k^{ee} + W^{2k} F_k^{eo} + W^{k} \left( F_k^{oe} + W^{2k} F_k^{oo} \right)$
By iterating this procedure $\log_2 N$ times we finally arrive at terms
that are identical to the given table values $f_n$.
EXAMPLE:
Putting $N = 8$ we have

$F_k = F_k^{e} + W^{k} F_k^{o}$

and

$F_k^{e} = F_k^{ee} + W^{2k} F_k^{eo}$    (8.109)

$F_k^{o} = F_k^{oe} + W^{2k} F_k^{oo}$    (8.110)

$F_k^{ee} = F_k^{eee} + W^{4k} F_k^{eeo} = f_0 + W^{4k} f_4$    (8.111)

$F_k^{eo} = F_k^{eoe} + W^{4k} F_k^{eoo} = f_2 + W^{4k} f_6$    (8.112)

$F_k^{oe} = f_1 + W^{4k} f_5, \qquad F_k^{oo} = f_3 + W^{4k} f_7$    (8.113)

Thus the correspondence between the table values $f_n$ and the terms
$F^{eee}$ etc. is as follows:

$F^{eee} = f_0,\quad F^{eeo} = f_4,\quad F^{eoe} = f_2,\quad F^{eoo} = f_6,$
$F^{oee} = f_1,\quad F^{oeo} = f_5,\quad F^{ooe} = f_3,\quad F^{ooo} = f_7$
EXERCISE:
Demonstrate that a similar analysis as above leads for $N = 4$ to the
correspondences

$F^{ee} = f_0,\quad F^{eo} = f_2,\quad F^{oe} = f_1,\quad F^{oo} = f_3$
It is easy to see that this correspondence is reproduced by the following
rule: 1) put $e \to 0$
and $o \to 1$, such that $F^{eee} \to (000)$, $F^{eeo} \to (001)$,
etc.;
2) reverse the bit pattern thus obtained and interpret the result as an
integer number $n$:
$(001) \to (100) = 4$.
In other words, arrange the table values $f_n$ in bit-reversed order.
(For example, $f_4$ is at position $1$ since
$4 = (100)_2$ and the reversed pattern is $(001)_2 = 1$.)
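The bit-reversal rule can be sketched compactly (the helper name is my own):

```python
def bit_reverse_indices(N):
    """For N = 2^m, return the source index for each position of the
    bit-reversed table: position p receives f_n with n = reverse of p's bits."""
    m = N.bit_length() - 1          # number of bits, log2(N)
    return [int(format(p, '0%db' % m)[::-1], 2) for p in range(N)]

# For N = 8 the rearranged table is f_0, f_4, f_2, f_6, f_1, f_5, f_3, f_7
print(bit_reverse_indices(8))
```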
The correctly arranged $f_n$ are now combined in pairs according to
8.114. The rule to follow in performing this ``decimation'' step
is sketched in Fig. 8.4.

[Figure 8.4: Decimation for $N = 8$]
On each level $l$ ($l = 1, 2, \ldots, \log_2 N$) the terms are combined according to

$F_k^{\cdots} = F_k^{\cdots e} + W^{(N/2^{l})\,k}\, F_k^{\cdots o}$    (8.114)
It is evident that the total number of operations is of order
$N \log_2 N$.
Further details of the method, plus sample programs in Pascal, Fortran,
or C are given in [PRESS 86].
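The complete Danielson-Lanczos recursion can be written in a few lines. This sketch follows the text's sign convention $W = e^{2\pi i/N}$ (so it is the conjugate of most library FFTs) and trades the in-place bit-reversed bookkeeping for explicit recursion:

```python
import cmath

def fft(f):
    """Recursive FFT: F_k = sum_n f_n W^{kn} with W = exp(2 pi i / N).
    Uses F_k = F_k^e + W^k F_k^o and F_{k+N/2} = F_k^e - W^k F_k^o."""
    N = len(f)
    if N == 1:
        return list(f)
    even = fft(f[0::2])            # transform of f_0, f_2, f_4, ...
    odd = fft(f[1::2])             # transform of f_1, f_3, f_5, ...
    W = [cmath.exp(2j * cmath.pi * k / N) for k in range(N // 2)]
    return ([even[k] + W[k] * odd[k] for k in range(N // 2)] +
            [even[k] - W[k] * odd[k] for k in range(N // 2)])
```

(The identity $W^{k+N/2} = -W^k$ supplies the second half of the spectrum without extra multiplications.)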
EXERCISE:
Sketch the pattern of Figure 8.4 for $N = 4$ and perform
the ``decimation''. Compare your result to equ. 8.114.
Franz J. Vesely Oct 2005
See also: "Computational Physics - An Introduction," Kluwer-Plenum 2001