This namespace builds on the ideas introduced in `riemann.cljc` and `midpoint.cljc`, and follows the pattern of those namespaces: a simple, easy-to-understand implementation first, then a more efficient version, then an incremental version that can reuse earlier evaluations, and finally a friendly exposed API. Let's begin.
A nice integration scheme related to the Midpoint method is the "Trapezoid" method. The idea here is to estimate the area of each slice by fitting a trapezoid between the function values at the left and right sides of the slice.
Alternatively, you can think of drawing a line between $f(x_l)$ and $f(x_r)$ and taking the area under the line.
What's the area of a trapezoid? The two slice endpoints are $(x_l, f(x_l))$ and $(x_r, f(x_r))$.
The trapezoid consists of a lower rectangle and a capping triangle. The lower rectangle's area is:
$$(x_r - x_l) f(x_l)$$

Just like in the left Riemann sum. The upper triangle's area is one half base times height:

$$\frac{1}{2} (x_r - x_l) (f(x_r) - f(x_l))$$

The sum of these simplifies to:

$$\frac{1}{2} (x_r - x_l) (f(x_l) + f(x_r))$$

Or, in Clojure:
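As a sketch (the `g` alias for the library's generic arithmetic operators is an assumption, used so the function also works on symbolic inputs):

```clojure
;; Area of the trapezoid fit over a single slice [xl, xr]:
;;   (1/2) * (xr - xl) * (f(xl) + f(xr))
;; `g` is assumed to alias the library's generic arithmetic namespace.
(defn single-trapezoid [f xl xr]
  (g// (g/* (g/- xr xl)
            (g/+ (f xl) (f xr)))
       2))
```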
We can use the symbolic algebra facilities in the library to show that this simplification is valid:
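A sketch of such a check, assuming the library exposes a literal-function constructor (aliased here as `af/literal-function`) and a `simplify` routine:

```clojure
;; Subtract the single-trapezoid formula from the rectangle + triangle
;; decomposition; the difference should simplify away to zero.
(let [f        (af/literal-function 'f)
      rect     (g/* (g/- 'x_r 'x_l) (f 'x_l))
      triangle (g/* (g// 1 2)
                    (g/- 'x_r 'x_l)
                    (g/- (f 'x_r) (f 'x_l)))]
  (simplify
   (g/- (g/+ rect triangle)
        (single-trapezoid f 'x_l 'x_r))))
;; => 0
```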
We can use `qr/windowed-sum` to turn this function into an (inefficient) integrator:
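For example (assuming `qr/windowed-sum` takes a per-slice area function plus the bounds and returns a function of the slice count `n`):

```clojure
;; Sum `single-trapezoid` over every slice of [a, b]. Inefficient, because each
;; interior point is evaluated twice: once by the slice on its left and once by
;; the slice on its right.
(defn- trapezoid-sum* [f a b]
  (qr/windowed-sum (partial single-trapezoid f)
                   a b))
```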
Fitting triangles is easy:
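For instance, the area under $f(x) = x$ from 0 to 10 is a triangle of area ${1 \over 2} \cdot 10 \cdot 10 = 50$; the trapezoid rule is exact for linear functions, so any number of slices recovers it:

```clojure
((trapezoid-sum* identity 0.0 10.0) 10)
;; => 50.0
```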
In fact, we can even use our estimator to estimate $\pi$:
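Since $\int_0^1 \frac{4}{1 + x^2}\,dx = \pi$, one such estimator might look like this (the name `pi-estimator*` is part of the sketch):

```clojure
(def ^:private pi-estimator*
  (let [f (fn [x] (/ 4 (+ 1 (* x x))))]
    (trapezoid-sum* f 0.0 1.0)))
```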
The accuracy is not bad for 10 slices, and 10000 slices gets us closer:
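Continuing the sketch (the digits shown are approximate):

```clojure
(pi-estimator* 10)
;; => 3.1399...          (vs pi = 3.14159265358979...)

(pi-estimator* 10000)
;; => 3.14159265...      (accurate to roughly 8 decimal places)
```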
Fun fact: the trapezoid method is equal to the average of the left and right Riemann sums. You can see that in the equation, but let's verify:
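Here is one way to check the claim numerically, written against hand-rolled left and right Riemann sums so the sketch doesn't depend on any particular helper from the Riemann namespace:

```clojure
(let [f     (fn [x] (/ 4.0 (+ 1 (* x x))))
      [a b] [0.0 1.0]
      left-sum  (fn [n]                  ;; h * (f(a) + f(a+h) + ... + f(b-h))
                  (let [h (/ (- b a) n)]
                    (* h (reduce + (map #(f (+ a (* % h))) (range n))))))
      right-sum (fn [n]                  ;; h * (f(a+h) + ... + f(b))
                  (let [h (/ (- b a) n)]
                    (* h (reduce + (map #(f (+ a (* % h))) (range 1 (inc n)))))))
      trap      (trapezoid-sum* f a b)]
  (every? (fn [n]
            (< (Math/abs (- (trap n)
                            (/ (+ (left-sum n) (right-sum n)) 2)))
               1e-10))
          [1 2 4 8 16 32]))
;; => true
```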
Next let's attempt a more efficient implementation. Looking at `single-trapezoid`, it's clear that each slice evaluates both of its endpoints. This means that each point on a border between two slices earns a contribution of $\frac{f(x)}{2}$ from each slice.
A more efficient implementation would evaluate both endpoints once and then sum (without halving) each interior point.
This interior sum is identical to a left Riemann sum (without the $f(a)$ evaluation), or a right Riemann sum (without $f(b)$).
Here is this idea implemented in Clojure:
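A sketch of this more efficient `trapezoid-sum`, using a plain `reduce` over the interior points rather than any library-specific summation helper:

```clojure
(defn trapezoid-sum
  "Returns a function of `n`, the number of slices, that estimates the definite
  integral of `f` over [a, b] using the trapezoid method. Each point is
  evaluated exactly once: the endpoints at half weight, the interior points at
  full weight."
  [f a b]
  (let [width (- b a)]
    (fn [n]
      (let [h  (/ width n)
            fx (fn [i] (f (+ a (* i h))))]
        (* h (+ (/ (+ (f a) (f b)) 2)
                (reduce + (map fx (range 1 n)))))))))
```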
We can define a new `pi-estimator` and check it against our less efficient version:
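Using the sketched `trapezoid-sum`:

```clojure
(def ^:private pi-estimator
  (let [f (fn [x] (/ 4 (+ 1 (* x x))))]
    (trapezoid-sum f 0.0 1.0)))

;; The two estimators agree up to floating-point roundoff:
(< (Math/abs (- (pi-estimator* 10)
                (pi-estimator 10)))
   1e-12)
;; => true
```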
Next let's develop an incremental updater for the Trapezoid rule that lets us reuse evaluation points as we increase the number of slices.
Because interior points of the Trapezoid method mirror the interior points of the left and right Riemann sums, we can piggyback on the incremental implementations for those two methods in developing an incremental Trapezoid implementation.
Consider the evaluation points of the trapezoid method with 2 slices, next to the points of a 4 slice pass: the new points required by the 4-slice pass are simply the midpoints of the existing slices, just like we had for the left (and right) Riemann sums. This means that we can reuse `qr/Sn->S2n` in our definition of the incrementally-enabled `trapezoid-sequence`:
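Here is one way the incremental sequence might be sketched. Rather than calling `qr/Sn->S2n` directly (its exact signature isn't reproduced here), the update it embodies, $T_{2n} = (T_n + M_n)/2$ where $M_n$ is the midpoint-method sum over the current slices, is written out inline:

```clojure
(defn trapezoid-sequence
  "Returns a lazy sequence of trapezoid estimates of the integral of `f` over
  [a, b] using n0, 2*n0, 4*n0, ... slices. Each doubling reuses all previous
  points and only evaluates `f` at the midpoints of the existing slices."
  ([f a b] (trapezoid-sequence f a b 1))
  ([f a b n0]
   (let [S    (trapezoid-sum f a b)
         M    (fn [n]                    ;; midpoint-method sum over n slices
                (let [h (/ (- b a) n)]
                  (* h (reduce + (map (fn [i] (f (+ a (* (+ i 0.5) h))))
                                      (range n))))))
         step (fn [[Sn n]]
                [(/ (+ Sn (M n)) 2) (* 2 n)])]
     (map first (iterate step [(S n0) n0])))))
```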
The following example shows that for the sequence $1, 2, 4, \dots, 2^n$, the incrementally-augmented `trapezoid-sequence` only performs $2^n + 1$ function evaluations; i.e., the same number of evaluations as the non-incremental `(trapezoid-sum f2 0 1)` would perform for $2^n$ slices. (Why $2^n + 1$? Each interior point is shared between adjacent slices, so each trapezoid contributes one evaluation, plus a final evaluation for the right side.)

The example also shows that evaluating every $n$ in the sequence costs $\sum_{i=0}^{n} (2^i + 1) = 2^{n+1} + n$ evaluations. As $n$ gets large, this is roughly twice what the incremental implementation costs.

When $n = 11$, the incremental implementation uses 2049 evaluations, while the non-incremental version takes 4107.
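One way to check those counts is to wrap `f` in a small atom-based call counter (an inline counter keeps the sketch self-contained):

```clojure
(let [counted (fn [f]                    ;; returns [counter-atom wrapped-f]
                (let [calls (atom 0)]
                  [calls (fn [x] (swap! calls inc) (f x))]))
      f       (fn [x] (/ 4.0 (+ 1 (* x x))))
      [c1 f1] (counted f)
      [c2 f2] (counted f)
      n-seq   (take 12 (iterate #(* 2 %) 1))]   ;; 1, 2, 4, ..., 2^11
  ;; Incremental: each doubling only evaluates the new midpoints.
  (dorun (take 12 (trapezoid-sequence f1 0 1 1)))
  ;; Non-incremental: every `n` starts from scratch.
  (run! (trapezoid-sum f2 0 1) n-seq)
  [@c1 @c2])
;; => [2049 4107]
```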
Another short example that hints of work to come. The incremental implementation is useful in cases where the sequence includes doublings nested in among other values.
For the sequence $2, 3, 4, 6, 8, 12, 16, 24, 32, 48, 64, 96$ (two interleaved doubling sequences, as used in the Bulirsch-Stoer method!), the incrementally-augmented `trapezoid-sequence` only performs 162 function evaluations, vs the 327 of the non-incremental `(trapezoid-sum f2 0 1)` mapped across the points.
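One sketch of how that reuse might work for an arbitrary sequence of slice counts: cache each estimate, and whenever a requested count is double one already computed, apply the same $T_{2n} = (T_n + M_n)/2$ update instead of starting from scratch. The `trapezoid-estimates` helper below is hypothetical, purely for illustration:

```clojure
(defn trapezoid-estimates
  "Hypothetical helper: trapezoid estimates for each slice count in `ns`,
  reusing the cached estimate at n/2 (when present) via T_2n = (T_n + M_n)/2."
  [f a b ns]
  (let [S     (trapezoid-sum f a b)
        M     (fn [n]                    ;; midpoint-method sum over n slices
                (let [h (/ (- b a) n)]
                  (* h (reduce + (map (fn [i] (f (+ a (* (+ i 0.5) h))))
                                      (range n))))))
        cache (atom {})]
    (mapv (fn [n]
            (let [half (@cache (quot n 2))
                  est  (if (and half (even? n))
                         (/ (+ half (M (quot n 2))) 2)
                         (S n))]
              (swap! cache assoc n est)
              est))
          ns)))

;; Evaluation counts over the interleaved doubling sequence:
(let [counted (fn [f] (let [n (atom 0)] [n (fn [x] (swap! n inc) (f x))]))
      f       (fn [x] (/ 4.0 (+ 1 (* x x))))
      [c1 f1] (counted f)
      [c2 f2] (counted f)
      ns      [2 3 4 6 8 12 16 24 32 48 64 96]]
  (trapezoid-estimates f1 0 1 ns)
  (run! (trapezoid-sum f2 0 1) ns)
  [@c1 @c2])
;; => [162 327]
```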
This is a good bit more efficient than the Midpoint method's incremental savings, since factors of 2 come up more often than factors of 3.
The final version is analogous to `qr/left-integral` and friends, including an option to `:accelerate?` the final sequence with Richardson extrapolation. (Accelerating the trapezoid method in this way is called "Romberg integration".)
The terms of the error series for the Trapezoid method increase as $h^2, h^4, h^6, \ldots$ (see https://en.wikipedia.org/wiki/Trapezoidal_rule#Error_analysis). Because of this, we pass $p = q = 2$ into `pr/richardson-sequence` below. Additionally, `integral` hardcodes the factor of 2 and doesn't currently allow for a custom sequence of $n$. This is configured by passing $t = 2$ into `pr/richardson-sequence`.
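As a sketch of what that acceleration might look like at the call site (the `[estimates t p q]` argument order for `pr/richardson-sequence` is an assumption):

```clojure
;; Accelerate the doubling trapezoid estimates with Richardson extrapolation:
;; t = 2 because each estimate doubles the slice count, p = q = 2 because the
;; error terms go h^2, h^4, h^6, ...
(let [f         (fn [x] (/ 4.0 (+ 1 (* x x))))
      estimates (trapezoid-sequence f 0.0 1.0 1)]
  (last (take 6 (pr/richardson-sequence estimates 2 2 2))))
;; converges toward pi far faster than the raw trapezoid estimates
```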
If you want to accelerate some other geometric sequence, call `pr/richardson-sequence` with some other value of `t`.
To accelerate an arbitrary sequence of trapezoid evaluations, investigate `polynomial.cljc` or `rational.cljc`. The "Bulirsch-Stoer" method uses either of these to extrapolate the Trapezoid method using a non-geometric sequence.
If you start with the trapezoid method, one single step of Richardson extrapolation (taking the second column of the Richardson tableau) is equivalent to "Simpson's rule". One step using `t = 3`, i.e., when you triple the number of integration slices per step, gets you "Simpson's 3/8 Rule". Two steps of Richardson extrapolation give you "Boole's rule".
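The Simpson's rule equivalence is easy to check numerically: a single Richardson step with $t = 2$, $p = 2$ combines $T_n$ and $T_{2n}$ as $(4 T_{2n} - T_n)/3$, which is exactly the composite Simpson's rule on $2n$ slices. The `simpson-sum` helper below is hypothetical, written only for this comparison:

```clojure
(defn- simpson-sum
  "Direct composite Simpson's rule over [a, b]; `n` must be even.
  The weights run 1, 4, 2, 4, ..., 2, 4, 1."
  [f a b]
  (fn [n]
    (let [h (/ (- b a) n)
          w (fn [i] (cond (or (zero? i) (= i n)) 1
                          (odd? i)               4
                          :else                  2))]
      (* (/ h 3)
         (reduce + (map (fn [i] (* (w i) (f (+ a (* i h)))))
                        (range (inc n))))))))

(let [f (fn [x] (/ 4.0 (+ 1 (* x x))))
      T (trapezoid-sum f 0.0 1.0)
      S (simpson-sum f 0.0 1.0)
      richardson-step (fn [n] (/ (- (* 4 (T (* 2 n))) (T n)) 3))]
  (< (Math/abs (- (richardson-step 8) (S 16)))
     1e-12))
;; => true
```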
The full Richardson-accelerated Trapezoid method is also known as "Romberg integration" (see `romberg.cljc`).
These methods will appear in their respective namespaces in the `quadrature` package.
See the wikipedia entry on Closed Newton-Cotes Formulas for more details.