To get fast function evaluations we use tables of function values f(x_i) at equidistant points x_i = i*Δ. For any x we take the nearest point x_i, with |x_i-x| ≤ Δ/2, and approximate f(x) ≅ f(x_i). This is the fastest and least accurate approximation. What is its error?

Assuming that the first derivative (df/dx) is constant, the maximum error of this approximation is

|f(x)-f(x_i)| ≅ 0.5Δ*|df/dx|, reached at |x-x_i| = Δ/2.

This means that the error is inversely proportional to the size of the table, so we need huge tables to get reasonably accurate approximations. Can we do better?
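As a minimal sketch of the nearest-point lookup, here is a hypothetical example using a sine table (the function, the table size of 1024 intervals and the names are my choices, not from the post):

```python
import math

# Hypothetical example: tabulate sin on [0, 2*pi] and look up the nearest entry.
DELTA = 2 * math.pi / 1024               # table spacing
TABLE = [math.sin(i * DELTA) for i in range(1025)]

def sin_nearest(x):
    """Approximate sin(x) by the nearest tabulated value (0 <= x <= 2*pi)."""
    i = int(x / DELTA + 0.5)             # round to the nearest grid index
    return TABLE[i]

# Worst-case error should be about 0.5 * DELTA * |df/dx|, and |cos| <= 1 here.
err = max(abs(sin_nearest(x) - math.sin(x))
          for x in (k * 0.001 for k in range(6283)))
```

With 1024 intervals the measured error stays below 0.5*Δ ≈ 0.003, matching the estimate above.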

Linear interpolation gives a far better approximation. We use the table entries below and above x, with x_i < x < x_(i+1), and the interpolation parameter δ = x - x_i. The approximation is then

f(x)≅F(x)=f(x_i)(1-δ/Δ)+f(x_(i+1))*δ/Δ,

which is also very fast. How accurate is this approximation?
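The interpolation formula above can be sketched with the same hypothetical sine table (again, the function and table size are illustrative assumptions):

```python
import math

# Hypothetical example: the same sin table, read with linear interpolation.
DELTA = 2 * math.pi / 1024
TABLE = [math.sin(i * DELTA) for i in range(1026)]   # one extra entry for i+1

def sin_linear(x):
    """Approximate sin(x) by f(x_i)*(1-delta/DELTA) + f(x_(i+1))*delta/DELTA."""
    i = int(x / DELTA)                   # table entry just below x
    delta = x - i * DELTA                # offset into the interval, 0 <= delta < DELTA
    return TABLE[i] * (1 - delta / DELTA) + TABLE[i + 1] * (delta / DELTA)

err = max(abs(sin_linear(x) - math.sin(x))
          for x in (k * 0.001 for k in range(6283)))
```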

This linear approximation has roughly the same first derivative as the exact function, so the error now depends on the second derivative (d²f/dx²). Assuming that it is constant, we get

E = |F(x)-f(x)| ≅ 0.5*δ(Δ-δ)*|d²f/dx²|,

which has its maximum at δ = 0.5Δ. Its value is

|F(x)-f(x)| ≅ 0.125Δ²*|d²f/dx²|.

The error of the linear approximation scales with Δ² instead of Δ: it is roughly the square of the already small error of the nearest-point approximation, and thus much smaller. Linear interpolation takes about three times the computation time and should usually be preferred.

If we have a simple approximation of the second derivative, then we can make an even better approximation with

f(x) ≅ F(x) - 0.5*δ(Δ-δ)*(d²f/dx²).

(Note the minus sign: for positive curvature the linear interpolation F(x) overestimates f(x), as one can check with f(x) = x².)

Its error is proportional to the third derivative and only of order Δ³*|d³f/dx³|.
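A minimal sketch of this corrected lookup, again assuming the hypothetical sine table: since sin'' = -sin, the table itself serves as a crude approximation of the second derivative.

```python
import math

# Hypothetical example: linear interpolation plus the second-derivative correction.
DELTA = 2 * math.pi / 1024
TABLE = [math.sin(i * DELTA) for i in range(1026)]

def sin_corrected(x):
    """F(x) - 0.5*delta*(DELTA-delta)*f''(x_i), with f'' = -sin read off the table."""
    i = int(x / DELTA)
    delta = x - i * DELTA
    linear = TABLE[i] * (1 - delta / DELTA) + TABLE[i + 1] * (delta / DELTA)
    d2f = -TABLE[i]                      # sin'' = -sin, taken at x_i
    return linear - 0.5 * delta * (DELTA - delta) * d2f

err = max(abs(sin_corrected(x) - math.sin(x))
          for x in (k * 0.001 for k in range(6283)))
```

The measured error drops to the order of Δ³, at the cost of a few extra multiplications per lookup.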

So we have three different approximations and a trade-off between computation time, accuracy and memory requirements. The ideal choices for the method and the table size depend on the hardware, the programming language and the compiler or interpreter. You will see a surprising example in the next post.
