Lagrange's interpolating polynomial is straightforward, but not terribly efficient; every time we change points or `x` we have to redo the entire calculation. Ideally we'd like to be able to perform:

1. some computation on `points` that would let us efficiently evaluate the fitted polynomial for different values of `x` in O(n) time, or
2. some computation on a particular `x` that would let us efficiently add new points to the set we use to generate the interpolating polynomial.

"Neville's algorithm" lets us generate the same interpolating polynomial recursively. By flipping the recursion around and generating values from the bottom up, we can achieve goal #2 and add new points incrementally.
## Neville's Algorithm
Start the recursion with a single point. Any point $(x_0, f(x_0))$ has a unique 0th order polynomial passing through it - the constant function $P_0(x) = f(x_0)$. For points $x_0$, $x_1$, ..., let's call these $P_0$, $P_1$, etc.
$P_{01}$ is the unique FIRST order polynomial (i.e., a line) going through points $x_0$ and $x_1$.
This first recursive step gives us this rule:

$$P_{01}(x) = \frac{(x - x_1) P_0(x) - (x - x_0) P_1(x)}{x_0 - x_1}$$
For higher order terms like $P_{012}$, let's call $P_{01}$ '$P_l$', and $P_{12}$ '$P_r$' (the polynomials fitted through the left and right sets of points).

Similarly, the left and rightmost inputs - $x_0$ and $x_2$ - will be $x_l$ and $x_r$.
Neville's algorithm states that:
$$P(x) = \frac{(x - x_r) P_l(x) - (x - x_l) P_r(x)}{x_l - x_r}$$
This recurrence works because the two parents $P_l$ and $P_r$ already agree at all points except $x_l$ and $x_r$; the weighting factors vanish at exactly those two points, so $P$ matches $f$ at every point from $x_l$ through $x_r$.

Neville's algorithm generates each new polynomial from $P_l$ and $P_r$, using this recursion to incorporate the full set of points.
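To make the recursion concrete, here's a sketch (not part of the namespace; the name `neville-recursive` is illustrative) that evaluates the interpolating polynomial at `x` by applying the rule above directly, splitting `points` into left and right branches:

```clojure
;; Direct, unoptimized transcription of the recursion above. `points` is a
;; vector of [x (f x)] pairs.
(defn neville-recursive [points x]
  (letfn [(evaluate [points]
            (if (= 1 (count points))
              (let [[[_ fx]] points] fx)
              (let [l-branch (pop points)       ;; drop the rightmost point
                    r-branch (subvec points 1)  ;; drop the leftmost point
                    [xl] (first points)
                    [xr] (peek points)]
                ;; P(x) = [(x - x_r) P_l(x) - (x - x_l) P_r(x)] / (x_l - x_r)
                (/ (+ (* (- x xr) (evaluate l-branch))
                      (* (- xl x) (evaluate r-branch)))
                   (- xl xr)))))]
    (evaluate (vec points))))
```

Fitting three points of $f(x) = x^2$ reproduces the quadratic exactly: `(neville-recursive [[1 1] [2 4] [3 9]] 4)` returns `16`.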
You can write out these relationships in a "tableau" that grows to the right from an initial left column:
```
p0
 \
  p01
 /   \
p1    p012
 \   /    \
  p12      p0123
 /   \    /     \
p2    p123       p01234
 \   /    \     /
  p23      p1234
 /   \    /
p3    p234
 \   /
  p34
 /
p4
```
The next few functions will discuss "rows" and "columns" of the tableau. These refer to the rows and columns of this representation:
```
p0 p01 p012 p0123 p01234
p1 p12 p123 p1234 .
p2 p23 p234 .     .
p3 p34 .    .     .
p4 .   .    .     .
.  .   .    .     .
.  .   .    .     .
```
The first column here is the initial set of points. Each entry in each successive column is generated through some operation between the entry to its left, and the entry one left and one down.
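This column-to-column step can be sketched as a one-liner: each new column merges every adjacent pair of entries in the previous column (`next-column` and its `merge` argument are illustrative names, not part of the namespace):

```clojure
;; Each entry of the new column combines the entry to its left (`l`) with the
;; entry one left and one down (`r`).
(defn next-column [merge col]
  (map merge col (rest col)))
```

For example, with `vector` standing in for `merge`, `(next-column vector [:p0 :p1 :p2])` returns `([:p0 :p1] [:p1 :p2])`.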
Look again at Neville's algorithm:
$$P(x) = \frac{(x - x_r) P_l(x) - (x - x_l) P_r(x)}{x_l - x_r}$$
$P_l$ refers to the entry in the same row, previous column, while $P_r$ is one row lower, previous column.
If each cell in the above tableau tracked:

- the value of $P(x)$ for the cell
- $x_l$, the $x$ value of the leftmost point incorporated so far
- $x_r$, the $x$ value of the rightmost point
we could build up Neville's rule incrementally. Let's attempt to build a function of this signature:
```clojure
(comment
  (defn neville-incremental
    "Takes a potentially lazy sequence of `points` and a point `x` and generates a
  lazy sequence of approximations of P(x).

  entry `N` in the returned sequence is the estimate using a polynomial
  generated from the first `N` points of the input sequence."
    [_points _x]))
```
First, write a function to process each initial point into a vector that contains each of those required elements:
```clojure
(defn- neville-prepare
  "Processes each point of the form `[x, (f x)]` into:

  $$[x_l, x_r, p]$$

  where $p$ is the polynomial that spans all points from $l$ to $r$. The
  beginning of a tableau row."
  [[x fx]]
  [x x fx])
```
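The matching merge step implements Neville's rule on two neighboring cells, each of the `[x_l, x_r, p]` shape produced by `neville-prepare`. A sketch (the full namespace presumably defines something like this; the name `neville-merge` is assumed from context):

```clojure
(defn- neville-merge
  "Returns a tableau merge function: combines the cell `l` (same row, previous
  column) with `r` (one row lower, previous column)."
  [x]
  (fn [[xl _ pl] [_ xr pr]]
    ;; P(x) = [(x - x_r) P_l(x) - (x - x_l) P_r(x)] / (x_l - x_r)
    [xl xr (/ (+ (* (- x xr) pl)
                 (* (- xl x) pr))
              (- xl xr))]))
```

Merging the prepared cells for points `(1, 1)` and `(2, 4)` at `x = 4` gives `[1 2 10]`: the line through both points, evaluated at `4`.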
How do we know this works? We can prove it by using generic arithmetic to compare the full symbolic Lagrange polynomial to each entry in the successive approximation.
```clojure
(comment
  (defn lagrange-incremental
    "Generates a sequence of estimates of `x` to polynomials fitted to `points`;
  each entry uses one more point, just like [[neville-incremental]]."
    [_points _x]))
```
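For reference, here's a sketch of the direct Lagrange evaluation that such a comparison relies on (a standalone version for illustration; the full namespace is assumed to define `lagrange` elsewhere):

```clojure
(defn lagrange
  "Evaluates, at the point `x`, the Lagrange interpolating polynomial through
  every point in `points`."
  [points x]
  (let [points (vec points)
        n      (count points)]
    (reduce
     + (for [i (range n)
             :let [[xi fi] (points i)]]
         ;; each term is f(x_i) times the basis polynomial that is 1 at x_i
         ;; and 0 at every other x_j.
         (* fi (reduce
                * (for [j (range n)
                        :when (not= i j)
                        :let [[xj] (points j)]]
                    (/ (- x xj) (- xi xj)))))))))
```

`(lagrange [[1 1] [2 4] [3 9]] 4)` returns `16`, matching the recursive Neville result above.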
The above pattern of processing tableau entries is general enough that we can abstract it out into a higher-order function that takes `prepare` and `merge` functions and generates a tableau. Any method generating a tableau can use a `present` function to extract the first row, OR to process the tableau in any other way it likes.
This is necessarily more abstract! But we'll specialize it shortly, and rebuild neville-incremental into its final form.
I'm keeping `points` in the argument vector for now, vs returning a new function; if you want to do this yourself, curry the function with `(partial tableau-fn prepare merge present)`.
```clojure
(defn tableau-fn
  "Returns a Newton-style approximation tableau, given:

  - `prepare`: a fn that processes each element of the supplied `points` into
  the state necessary to calculate future tableau entries.

  - `merge`: a fn of `l` and `r` tableau entries:

  l -- return
       /
      /
     /
  r

  the inputs are of the same form returned by `prepare`. `merge` should return a
  new structure of the same form.

  - `points`: the (potentially lazy) sequence of points used to generate the
  first column of the tableau.

  Returns a sequence of tableau columns."
  [prepare merge points]
  (->> (map prepare points)
       (iterate (fn [col]
                  ;; each new column merges adjacent pairs of the previous one.
                  (map merge col (rest col))))
       (take-while seq)))
```
And now, [[neville]], identical to [[neville-incremental]] except using the generic tableau generator.
The form of the tableau also makes it easy to select a particular column instead of just the first row. Columns are powerful because they allow you to successively interpolate between pairs, triplets etc. of points, instead of moving on to very high order polynomials.
I'm not sure it's the best interface, but we'll add that arity here.
```clojure
(defn neville
  "Takes:

  - a (potentially lazy) sequence of `points` of the form `[x (f x)]` and
  - a point `x` to interpolate

  and generates a lazy sequence of approximations of `P(x)`. Each entry in the
  return sequence incorporates one more point from `points` into the `P(x)`
  estimate.

  Said another way: the Nth entry in the returned sequence is the estimate using
  a polynomial generated from the first `N` points of the input sequence:

  p0 p01 p012 p0123 p01234

  The optional `column` argument selects a particular tableau column instead of
  the first row."
  ([points x]
   (->> (tableau-fn neville-prepare
                    (neville-merge x)
                    points)
        (map first)              ;; first row of the tableau
        (map (fn [[_ _ p]] p))))
  ([points x column]
   (map (fn [[_ _ p]] p)
        (nth (tableau-fn neville-prepare
                         (neville-merge x)
                         points)
             column))))
```
This function generates each estimate using Neville's algorithm:

$$P(x) = \frac{(x - x_r) P_l(x) - (x - x_l) P_r(x)}{x_l - x_r}$$
## Modified Neville

Press's Numerical Recipes, chapter 3 (p103) (http://phys.uri.edu/nigh/NumRec/bookfpdf/f3-1.pdf) describes a modified version of Neville's algorithm that is slightly more efficient than the version above.

Allan Macleod, in "A comparison of algorithms for polynomial interpolation", discusses this variation under the name "Modified Neville".

By generating the delta from each previous estimate in the tableau, Modified Neville is able to swap one of the multiplications above for an addition.
To make this work, instead of tracking the previous $p$ estimate, we track two quantities:

- $C_{abc}$ is the delta between $P_{abc}$ and $P_{ab}$, i.e., $C_{abc} = P_{abc} - P_{ab}$.
- $D_{abc}$ is the delta between $P_{abc}$ and $P_{bc}$, i.e., $D_{abc} = P_{abc} - P_{bc}$.
We can recover the estimates generated by the original Neville's algorithm by summing C values across the first tableau row.
Equation 3.1.5 in Numerical Recipes gives us the equations we need:

$$C_{abc} = \frac{(x_a - x)(C_{bc} - D_{ab})}{x_a - x_c}$$

$$D_{abc} = \frac{(x_c - x)(C_{bc} - D_{ab})}{x_a - x_c}$$
These equations describe a merge function for a tableau processing scheme, with state == `[x_l, x_r, C, D]`.

Let's implement each method, and then combine them into final form. The following methods use the prefix `mn` for "Modified Neville".
```clojure
(defn- mn-prepare
  "Processes an initial point `[x (f x)]` into the required state:

  [x_l, x_r, C, D]

  The initial value for both `C` and `D` is `(f x)`."
  [[x fx]]
  [x x fx fx])
```
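The corresponding merge step is a direct transcription of equation 3.1.5 above. A sketch (the helper name `mn-merge` is assumed from context): the left cell contributes its `D` and the right cell its `C`:

```clojure
(defn- mn-merge
  "Returns a tableau merge function implementing the `C` and `D` update rules
  from Numerical Recipes equation 3.1.5."
  [x]
  (fn [[xl _ _ dl] [_ xr cr _]]
    (let [factor (/ (- cr dl) (- xl xr))]
      [xl xr
       (* factor (- xl x))     ;; C = (x_l - x)(C_r - D_l) / (x_l - x_r)
       (* factor (- xr x))]))) ;; D = (x_r - x)(C_r - D_l) / (x_l - x_r)
```

Merging the prepared cells for points `(1, 1)` and `(2, 4)` at `x = 4` gives `[1 2 9 6]`; adding the `C` delta `9` to the previous estimate `1` recovers `10`, the plain Neville estimate for the line through both points.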
`tableau-fn` allows us to assemble these pieces into a final function that has an interface identical to `neville` above. The implementation is more obfuscated, but slightly more efficient.
```clojure
(defn modified-neville
  "Similar to [[neville]] (the interface is identical) but slightly more
  efficient. Internally this builds up its estimates by tracking the delta from
  the previous estimate.

  This non-obvious change lets us swap one of the multiplications for an
  addition, making the algorithm slightly more efficient.

  See [[neville]] for usage information, and info about the required structure
  of the arguments.

  The structure of the [[modified-neville]] algorithm makes it difficult to
  select a particular column. See [[neville]] if you'd like to generate
  polynomial approximations between successive sequences of points.

  References:

  - [\"A comparison of algorithms for polynomial interpolation\"](https://www.sciencedirect.com/science/article/pii/0771050X82900511), A. Macleod"
  [points x]
  (->> (tableau-fn mn-prepare
                   (mn-merge x)
                   points)
       (map first)               ;; first row of the tableau
       (map (fn [[_ _ c _]] c))  ;; the C delta of each entry
       (reductions +)))          ;; running sums recover each estimate
```
NOTE: these folds and scans seem to be higher performance than the functions above. Prefer the `*-sum` functions when you want to consume a full sequence of points and get the final estimate, and the `*-scan` functions when you want to observe all intermediate estimates. If you want a column of the tableau, stick with the versions above.
The advantage of the method described above, where we generate an entire tableau and lazily pull the first entry off of each column, is that we can pass a lazy sequence in as `points` and get a lazy sequence of successive estimates back. As you pull an element off of the returned sequence, it just-in-time generates the new diagonal of the tableau required to realize that number. If we don't pull from the result sequence, no computation will occur.

One problem with that structure is that we have to have our full sequence of points available when we call a function like `neville`. Even if the sequence is lazy, this is limiting. What if we want to pause, save the current estimate and pick up later where we left off?
If you stare at this for a while, you might notice that it should be possible to use the merge and present functions we already have to build the tableau one row at a time, given ONLY the previous row:
This method reverses the order of the points, since rows are built from the bottom up:
```
p4 p43 p432 p4321 p43210
p3 p32 p321 p3210 .
p2 p21 p210 .     .
p1 p10 .    .     .
p0 .   .    .     .
```
The order of the points is reversed, but this is fine; polynomial interpolation doesn't care about the order of points. (NOTE that this WILL be something we have to consider in the fold version of Richardson extrapolation, in emmy.polynomial.richardson!)
Notice that the *diagonal* of this tableau is identical to the top row of the tableau before the points were reversed.
Here's something close, using our previous merge and prepare definitions:
```clojure
(comment
  (defn generate-new-row* [prepare merge]
    (fn [prev-row point]
      ;; the new point, once it's prepared, is the first entry in the new row.
      ;; From there, we can treat the previous row as a sequence of "r" values.
      (reduce merge (prepare point) prev-row))))
```
There's a problem here: `reduce` only returns the FINAL value of the aggregation. If we want to continue building rows, we need the entire new row! Lucky for us, Clojure has a version of `reduce`, called `reductions`, that returns each intermediate aggregation result:
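Swapping `reduce` for `reductions` gives a version that keeps the whole row (same assumed shape as `generate-new-row*` above):

```clojure
(defn generate-new-row [prepare merge]
  (fn [prev-row point]
    ;; `reductions` returns the prepared point followed by every intermediate
    ;; merge result: exactly the entries of the next tableau row.
    (reductions merge (prepare point) prev-row)))
```

For example, with `identity` as `prepare` and `+` playing the role of `merge`, `((generate-new-row identity +) [1 2 3] 10)` returns `(10 11 13 16)`: the prepared point followed by each intermediate aggregation.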
Folds are explored (and implemented!) in detail in the [[emmy.algebra.fold]] namespace. See that namespace for an introduction, but I'll add a reminder here.
A fold consists of:

- `init`, a function that returns an initial piece of state called an "accumulator",
- a binary `merge` function that combines ("folds") a new element `x` into the accumulator, and returns a value of the same shape / type as the accumulator returned by `init`, and
- a `present` function that transforms the accumulator into a final value.
In Clojure, you perform a fold on a sequence with the `reduce` function:

```clojure
(reduce merge (init) xs)
```

For example:

```clojure
(reduce + 0.0 (range 10))
;; => 45.0
```
Our `generate-new-row` function from above is exactly the `merge` function of a fold. The accumulator is the latest tableau row:

- `init` == a function that returns `[]`, the initial empty row.
- `present` == a function similar to `neville-present` or `mn-present` that simply sums up the estimate deltas for the entire row, instead of returning the running tally.

`present` only needs to return the final value in each row because that is the best current estimate given all points supplied so far.
If you want to recover the previous behavior of a lazy sequence of all estimates, the lazy "scan" pattern of emmy.util.aggregate/scan allows you to observe each element of the diagonal of the tableau as it's generated. This is identical to the "first row" of the non-fold tableau.
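As a sketch of that scan pattern (assuming a fold packaged in the arity style described below; this is not the actual `emmy` implementation), scanning is just `reductions` with the fold's own init and present:

```clojure
(defn fold->scan
  "Returns a function that consumes a sequence `xs` and returns the sequence of
  `(present accumulator)` values after each element is folded in."
  [fold]
  (fn [xs]
    (->> (reductions fold (fold) xs) ;; every intermediate accumulator
         (rest)                      ;; drop the empty initial state
         (map fold))))               ;; 1-arity of the fold acts as `present`
```

Since `+` happens to satisfy this interface (`(+)` returns `0`, and its 1-arity is the identity), `((fold->scan +) [1 2 3])` returns `(1 3 6)`.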
Now that we've identified this new pattern, we can rewrite generate-new-row to return a new function matching the fold interface described in [[emmy.algebra.fold]]. The new function is called [[tableau-fold-fn]]:
```clojure
(defn tableau-fold-fn
  "Given `prepare`, `merge` and `present` functions, returns a fold capable of
  aggregating a point of the form `[x, f(x)]` into an accumulating tableau
  row (generating the next tableau row).

  The 0-arity of the returned function returns an empty row, `[]`.

  The 1-arity calls the supplied `present` on the accumulated tableau row.

  The 2-arity scans the supplied `merge` across all entries in the accumulating
  row, producing a new row.

  ### More detail on the arguments:

  - `prepare`: a fn that processes each element of the supplied `points` into
  the state necessary to calculate future tableau entries.

  - `merge`: a fn of `l` and `r` tableau entries:

  l -- return
       /
      /
     /
  r

  the inputs are of the same form returned by `prepare`. `merge` should return a
  new structure of the same form.

  - `present`: transforms a tableau row into an estimate, at some value `x`, of
  the polynomial interpolated to hit all supplied points."
  [prepare merge present]
  (fn
    ([] [])
    ([row] (present row))
    ([row point]
     (vec (reductions merge (prepare point) row)))))
```
Next, we can use this to generate specialized fold functions for our two incremental algorithms above: `neville` and `modified-neville`.

Instead of using `neville-present` as above, `neville-fold` returns only the final estimate. This is because folds are meant to consume their entire input sequence. If you want to observe intermediate values as they're generated, you can use the "scan" pattern, implemented in `neville-scan`.
```clojure
(defn neville-fold
  "Given some point `x`, returns a fold that accumulates rows of an interpolation
  tableau providing successively better estimates (at the value `x`) of a
  polynomial interpolated to all seen points.

  The 2-arity aggregation step takes:

  - `previous-row`: previous row of an interpolation tableau
  - a new point of the form `[x_new (f x_new)]`

  and returns the next row of the tableau using the algorithm described in
  [[neville]]."
  [x]
  (tableau-fold-fn neville-prepare
                   (neville-merge x)
                   ;; the final entry in each row spans every point seen so far:
                   (fn [row] (peek (peek row)))))
```
Instead of using `mn-present` as above, `modified-neville-fold` uses the dynamically bound `emmy.util.aggregate/*fold*` to sum all deltas and return the current best estimate taking all points into account.

If you want to observe intermediate values as they're generated, you can use the "scan" pattern, implemented in `modified-neville-scan`.
```clojure
(defn ^:no-doc mn-present-final
  "Aggregates intermediate deltas to produce an estimate for the final value in
  the tableau row."
  [row]
  ;; NOTE: sums the C deltas; the full implementation may sum via the
  ;; dynamically bound *fold* mentioned above.
  (reduce + (map (fn [[_ _ c _]] c) row)))
```
`af/fold->scan` will return a function that acts identically to the non-fold, column-wise version of the interpolators. It does this by folding in one point at a time, but processing EVERY intermediate value through the presentation function.

Using this function, we can specialize to our two incremental methods.
```clojure
(defn neville-sum
  "Returns a function that consumes an entire sequence `xs` of points of the form
  `[x_i, f(x_i)]` and returns the best approximation of `x` using a polynomial
  fitted to all points in `xs` using the algorithm described in [[neville]].

  Faster than, but equivalent to, `(last (neville xs x))`."
  [x]
  (af/fold->sum-fn (neville-fold x)))
```
Next, check out:

- `rational.cljc` to learn how to interpolate rational functions
- `richardson.cljc` for a specialized implementation of polynomial interpolation, when you know something about the ratios between successive `x` elements in the point sequence.