Lagrange polynomial interpolation is particularly convenient when the same values V_{0}, V_{1}, ..., V_{n} are repeatedly used in several applications: the data values can be stored in computer memory, and the number of computations can thus be reduced. A limitation of Lagrange interpolation arises when data points are added or removed to improve the appearance of the interpolating curve: the interpolation must be completely recomputed every time the data points change. It is thus harder to control the appearance of the curve with the Lagrange algorithm than with the alternative (Newton) polynomial interpolation.
Consider the following table, which organizes the construction of forward divided differences:
| data points | data values | first divided difference | second divided difference | third divided difference |
|---|---|---|---|---|
| V_{0} | I_{0} | f[V_{0},V_{1}] | f[V_{0},V_{1},V_{2}] | f[V_{0},V_{1},V_{2},V_{3}] |
| V_{1} | I_{1} | f[V_{1},V_{2}] | f[V_{1},V_{2},V_{3}] | |
| V_{2} | I_{2} | f[V_{2},V_{3}] | | |
| V_{3} | I_{3} | | | |
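The table above can be sketched in code. The function below is an illustrative implementation (names are my own, not from the lecture) that builds the triangular table row by row from the definitions that follow:

```python
# Sketch of the forward divided-difference table, assuming the data
# points V and data values I are given as equal-length Python lists.

def divided_differences(V, I):
    """Return the triangular table of forward divided differences.

    table[m][k] holds f[V_k, ..., V_{k+m}]; row 0 is the data values I_k.
    """
    n = len(V)
    table = [list(I)]  # zeroth differences are the data values themselves
    for m in range(1, n):
        prev = table[m - 1]
        row = [(prev[k + 1] - prev[k]) / (V[k + m] - V[k])
               for k in range(n - m)]
        table.append(row)
    return table

# Example with four points and I(V) = V**2, so the second divided
# differences are constant and the third one vanishes:
V = [0.0, 1.0, 2.0, 3.0]
I = [v**2 for v in V]
table = divided_differences(V, I)
# table[1] -> [1.0, 3.0, 5.0], table[2] -> [1.0, 1.0], table[3] -> [0.0]
```

Note that adding one more data point only appends one entry per row plus one new row, which is the computational advantage discussed at the end of this section.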
The first divided difference approximates the slope of the function I = I(V) at the point (V_{k}, I_{k}) (cf. the secant method):
f[V_{k},V_{k+1}] = ( I_{k+1} - I_{k} ) / ( V_{k+1} - V_{k} )
Second, third and higher-order divided differences approximate the corresponding higher-order derivatives of the function I= I(V) by using the recursive rule:
f[V_{k},...,V_{k+m}] = ( f[V_{k+1},...,V_{k+m}] - f[V_{k},...,V_{k+m-1}] ) / ( V_{k+m} - V_{k} )
If the data points are equally spaced, i.e. h = V_{k} - V_{k-1} for every k = 1,2,...,n, the divided differences can be written in terms of forward differences:
f[V_{k},...,V_{k+m}] = D^{m} f[V_{k}] / ( m! h^{m}),
where D^{m} f[V_{k}] is the m-th order forward difference (see Lecture 3.1), i.e.,
D f[V_{k}] = I_{k+1} - I_{k}
D^{2} f[V_{k}] = I_{k+2} - 2 I_{k+1} + I_{k}
D^{3} f[V_{k}] = I_{k+3} - 3 I_{k+2} + 3 I_{k+1} - I_{k}
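The equal-spacing relation f[V_{k},...,V_{k+m}] = D^{m} f[V_{k}] / (m! h^{m}) can be checked numerically. The snippet below is an illustrative verification (not part of the lecture) for four equally spaced points with h = 1 and I_{k} = V_{k}^3:

```python
# Numerical check that divided differences equal D^m f / (m! h^m)
# for equally spaced data. All names here are illustrative.
from math import factorial

h = 1.0
I = [0.0, 1.0, 8.0, 27.0]          # I_k = V_k**3 on V_k = k*h

# Forward differences at k = 0, as defined above:
D1 = I[1] - I[0]
D2 = I[2] - 2*I[1] + I[0]
D3 = I[3] - 3*I[2] + 3*I[1] - I[0]

# Divided differences computed directly from the recursive rule:
f01 = (I[1] - I[0]) / h
f12 = (I[2] - I[1]) / h
f23 = (I[3] - I[2]) / h
f012 = (f12 - f01) / (2*h)
f123 = (f23 - f12) / (2*h)
f0123 = (f123 - f012) / (3*h)

assert f01 == D1 / (factorial(1) * h**1)
assert f012 == D2 / (factorial(2) * h**2)
assert f0123 == D3 / (factorial(3) * h**3)
```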
After all necessary divided differences have been computed from the data set, the Newton interpolating polynomials are defined similar to the Taylor polynomials:
I(V) = I_{0} + f[V_{0},V_{1}] (V - V_{0})
+ f[V_{0},V_{1},V_{2}] (V - V_{0}) (V - V_{1})
+ ... + f[V_{0},V_{1},...,V_{n}] (V - V_{0})
(V - V_{1}) ... (V - V_{n-1}).
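The Newton polynomial above is naturally evaluated by nested multiplication (a Horner-like scheme). The sketch below assumes the top-row coefficients I_{0}, f[V_{0},V_{1}], ..., f[V_{0},...,V_{n}] have already been computed; the function name is my own:

```python
# Nested-multiplication evaluation of the Newton interpolating polynomial
# I(v) = c_0 + c_1 (v - V_0) + ... + c_n (v - V_0) ... (v - V_{n-1}),
# rewritten as c_0 + (v - V_0)(c_1 + (v - V_1)(c_2 + ...)).

def newton_eval(V, coeffs, v):
    """Evaluate the Newton form at v, given nodes V and the top-row
    divided-difference coefficients coeffs = [c_0, ..., c_n]."""
    result = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        result = coeffs[k] + (v - V[k]) * result
    return result

# Example: for V = [0, 1, 2, 3] and I(V) = V**2 the top-row divided
# differences are [0, 1, 1, 0], and the nested form recovers v**2:
V = [0.0, 1.0, 2.0, 3.0]
coeffs = [0.0, 1.0, 1.0, 0.0]
value = newton_eval(V, coeffs, 1.5)   # -> 2.25
```

Each evaluation costs only n multiplications and 2n additions, which is the "nested multiplication" advantage mentioned below.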
The Newton interpolating polynomial also has degree n and also passes through the (n+1) given data points. In fact, when expanded in powers of V, it coincides with the Lagrange interpolating polynomial of Lecture 2-1, so the error of Newton interpolation is the same as the error of Lagrange interpolation. The difference between the Newton and Lagrange interpolating polynomials lies only in the computational aspect: the advantages of Newton interpolation are the use of nested multiplication and the relative ease of adding more data points for higher-order interpolating polynomials.
More interpolation algorithms are in use in modern numerical analysis. The chapters on Hermite, Chebyshev, and Padé interpolations are left for more advanced studies (optional chapters 4.5, 4.6 of the main textbook).