Multivariate polynomial interpolation

Here is a summary of my ongoing investigations into the error in multivariate interpolation. This page is organised so as to serve, more generally, those interested in multivariate polynomial interpolation (error formulae and computations), and contributions are most welcome.

I am interested in bounding the p-norm of the error in a multivariate polynomial interpolation scheme by norms of derivatives which annihilate (kill) the interpolating space, and so require the interpolated function to be no smoother than is necessary. These are the estimates that numerical analysts use to show that a numerical scheme based on multivariate polynomial interpolation has the highest order of accuracy that its polynomial reproduction allows. Because of this interest (especially in the area of finite elements), there is a large literature dealing with these questions, particularly for linear interpolation on a triangle.
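
To fix ideas, here is the typical shape of such an estimate for linear interpolation at the vertices of a triangle (an illustrative form only, not quoted from any particular paper). If L_T f denotes the linear interpolant to f at the vertices of a triangle T of diameter h_T, then

\[ \| f - L_T f \|_{L_p(T)} \le C \, h_T^2 \, | f |_{W_p^2(T)}, \qquad 1 \le p \le \infty, \]

where | f |_{W_p^2(T)} is the L_p-norm over T of the second derivatives of f, and the constant C depends only on the shape of T (e.g. on its smallest angle). The derivatives on the right are precisely those which annihilate the linear polynomials, and the factor h_T^2 is the highest order of accuracy that linear reproduction allows.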

The basic idea behind all of the constructive work to date is to find pointwise error formulae that involve integrals of the desired derivatives. Even in the simplest of cases, linear interpolation on a triangle, there are many possible ways to do this (see The error in linear interpolation at the vertices of a simplex, Waldron 1994). Good examples of such pointwise error formulae include my work dealing with the error in Kergin and Hakopian interpolation, Sauer and Xu's error formula, de Boor's multivariate divided difference, and the references therein. Kergin and Hakopian interpolation are `lifted' versions of Hermite interpolation.
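
The simplest univariate instance shows what is meant by such a formula (recalled here only for orientation). For linear interpolation L at the points a and b, and a <= x <= b,

\[ f(x) - Lf(x) = (x-a)(x-b)\, f[a,b,x] = \int_a^b K(x,t)\, f''(t)\, dt, \]

where f[a,b,x] is a divided difference and the Peano kernel K(x,\cdot) is a scalar multiple of the linear B-spline with knots a, b, x. The multivariate formulae referred to above play an analogous role, typically with simplex splines and directional derivatives in place of the B-spline and f''.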

Given a pointwise error formula involving the desired derivatives, it is generally not clear how to obtain L_p-error bounds (for finite p) from it. In many cases L_p-error bounds can be obtained by using a multivariate form of Hardy's inequality, while for others there is currently no method for obtaining such bounds (and this appears to be a fundamental limitation of these formulae).
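
For orientation, the classical univariate form of Hardy's inequality reads: for p > 1 and g >= 0 on (0,\infty),

\[ \left( \int_0^\infty \Big( \frac{1}{x} \int_0^x g(t)\, dt \Big)^{p} dx \right)^{1/p} \le \frac{p}{p-1} \left( \int_0^\infty g(t)^p\, dt \right)^{1/p}. \]

The multivariate forms referred to above are generalisations of this inequality, which is what is needed to bound the averages of derivatives appearing in a pointwise error formula by L_p-norms of those derivatives.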

There are a number of methods used for obtaining pointwise error formulae for multivariate polynomial interpolation schemes, including:

  1. The error is written in a form which involves the function values (and derivatives) matched by the interpolation, and this is then suitably differenced to introduce the desired derivatives. The key tool here is the fundamental theorem of calculus in two of its guises: the integration by parts formula and the recurrence relation for simplex splines.
  2. A version of the multipoint Taylor formula (see here); its simplest case is sketched after this list. This formula is well known in finite elements, but has received little attention from approximation theorists.
  3. A Newton form for the interpolation polynomial is used to give an error formula in a similar way to the classical univariate setting. Or, to quote Thomas Sauer, ``the crucial idea is to find a RECURSIVE way of building up the interpolation polynomial, or the remainder term, and to this you apply the fundamental theorem of calculus''.
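
For linear interpolation L at the vertices v_0, ..., v_n of a simplex, the multipoint Taylor formula of item 2 can be sketched as follows (its simplest case, included for orientation). Taylor's formula with integral remainder, expanded about the point x and evaluated at a vertex v_i, gives

\[ f(v_i) = f(x) + Df(x)(v_i - x) + \int_0^1 (1-t)\, D^2 f\big(x + t(v_i - x)\big)(v_i - x)^2\, dt, \]

with D^2 f(y)(v)^2 the second directional derivative of f at y in the direction v. Multiplying by the barycentric coordinates \lambda_i(x) and summing, the terms involving Df(x) cancel (since \sum_i \lambda_i(x) = 1 and \sum_i \lambda_i(x)(v_i - x) = 0), leaving the pointwise error formula

\[ Lf(x) - f(x) = \sum_i \lambda_i(x) \int_0^1 (1-t)\, D^2 f\big(x + t(v_i - x)\big)(v_i - x)^2\, dt, \]

which involves only the second derivatives, i.e. precisely those derivatives which annihilate the linear polynomials.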

Some well-known work that uses `differencing' to find error formulae

Some well-known work on the multipoint Taylor formula

Some recent work that uses a `Newton form' to find error formulae

Some more recent papers dealing with multivariate polynomial interpolation

Relevant books

References

Some work in progress and people involved

Univariate interpolation

Due to an oversight, the crucial role that univariate splines play in describing the error in Hermite interpolation has only recently been exploited to obtain L_p-error bounds. To find out about recent advances, see Project Hermite.
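
Roughly, the connection is through the Peano kernel theorem (sketched here for orientation). If H denotes Hermite interpolation from the polynomials of degree less than n, at points whose multiplicities sum to n, then for each x the error functional f \mapsto f(x) - Hf(x) annihilates the polynomials of degree less than n, and so for sufficiently smooth f

\[ f(x) - Hf(x) = \int K(x,t)\, f^{(n)}(t)\, dt, \]

where the Peano kernel K(x,\cdot) is a univariate spline of degree n-1 with knots at x and at the points of interpolation. L_p-error bounds then follow from estimates for this spline kernel.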


This document is maintained by Shayne (waldron@math.auckland.ac.nz). Last Modified: .