From: allen_abrahamson@my-deja.com
Subject: Re: best polynomial fit
Date: Wed, 20 Sep 2000 03:44:37 GMT
Newsgroups: sci.math.num-analysis
In article <39C5F71B.6024FAB4@natlab.research.philips.com>,
C.I.Softley@ncl.ac.uk wrote:
> David Mehringer wrote:
> >
> > Hi,
> > I was wondering if someone could help me with the following problem.
> >
> > I have N (x,y) data points to which I want to fit polynomial of degree A
> > (obviously A < N).
> >
[snip]
> A simpler approach to finding the almost-minimax polynomial is
> to find a polynomial in terms of Chebyshev polynomials
> which interpolates all of
> your data points, then throw away however many high-order terms until
> you have the degree of polynomial you want. If you want a
> straightforward polynomial, rather than one in terms of Chebyshev
> polynomials, it's possible to do that conversion. Maple can also do
> this, but the reason I'm mentioning it is that source code for this, as
> well as source code for least-squares fitting, can be found in the
> 'Numerical Recipes' books. The 'C' and FORTRAN incarnations of these are
> available online at http://lib-www.lanl.gov/numerical/
Finally, somebody in this thread suggests orthogonal polynomial fitting!
(Apologies, there was one off-hand mention somewhat earlier.) The
_Numerical Recipes_ discussion is quite clear on the "how". They even
go so far as to state flat out: never fit with polynomials.
But may I suggest *why* polynomial fitting in high degree is doomed
if there is any noise in the data, and why orthogonal methods will hold
a curve's shape? (Of course, this is a rhetorical question; I'm going to
anyway :-)
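Before the "why", here is roughly what the quoted interpolate-and-
truncate suggestion looks like in practice. This is just my own sketch
using NumPy's standard numpy.polynomial.chebyshev routines (chebfit,
chebval, cheb2poly), not the Numerical Recipes code; the sample data
are made up, and the truncation trick behaves best when the abscissas
sit at or near Chebyshev points.

  import numpy as np
  from numpy.polynomial import chebyshev as C

  # Made-up data: N points at Chebyshev points of the 2nd kind on [-1,1]
  xs = C.chebpts2(21)
  ys = np.exp(xs) + 0.01 * np.random.default_rng(0).normal(size=xs.size)

  # Degree N-1 Chebyshev series that interpolates every data point
  full = C.chebfit(xs, ys, deg=xs.size - 1)

  # Throw away the high-order terms; keep, say, degree 5
  trunc = full[:6]
  fitted = C.chebval(xs, trunc)

  # If an ordinary power-basis polynomial is really wanted, convert it
  p = C.cheb2poly(trunc)        # coefficients of 1, x, x^2, ...

Now, the promised "why".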
Ultimately, curve fitting methods solve a linear system, with the
polynomial coefficients as the "unknowns". The columns of the data
matrix are the successive powers of the abscissa values. Make up a
simple example: put N (20 or 50 or 100) integers in a spreadsheet
column, their squares in the next column, their cubes next, etc. Check
the correlations between the columns: all are well above 0.9. If this
matrix is X, and the solution entails X'X, that matrix is exceedingly
ill-conditioned, and any noise on the right-hand side (the y values)
will be greatly amplified in the resulting polynomial coefficients. And
a little noise in a high-order coefficient can translate to a lot of
wriggle.

On the other hand, columns holding the values of successive-order
orthogonal polynomials at the abscissas (first transformed to the
[-1,1] interval on which the Chebyshev and Legendre polynomials are
defined) tend to have very low collinearity, the numerical reflection
of the pairwise products integrating to zero under some weight
function. This gives a much better-conditioned solution matrix. In
passing, this is also an argument for using Legendre, rather than the
minimax Chebyshev, polynomials in empirical situations: the former's
weight function is unity, and the solution system's conditioning is
often better.
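To put numbers on it, here is a small sketch along the lines of the
spreadsheet experiment above. Again this is only my own NumPy
illustration (np.vander, numpy.polynomial.legendre.legvander and
np.linalg.cond are standard routines); the number of points and the
degree are arbitrary choices.

  import numpy as np
  from numpy.polynomial import legendre as L

  n, deg = 50, 6
  t = np.arange(1.0, n + 1.0)          # the spreadsheet column of integers

  # Power basis on the raw abscissas: columns 1, t, t^2, ..., t^deg
  X = np.vander(t, deg + 1, increasing=True)
  print(np.corrcoef(X[:, 1:], rowvar=False).min())  # even the least-correlated
                                                    # pair of power columns is
                                                    # strongly correlated
  print(np.linalg.cond(X.T @ X))       # normal-equations matrix: enormous

  # Legendre basis, with the abscissas first mapped onto [-1, 1]
  x = 2.0 * (t - t.min()) / (t.max() - t.min()) - 1.0
  G = L.legvander(x, deg)              # columns P_0(x), P_1(x), ..., P_deg(x)
  print(np.linalg.cond(G.T @ G))       # modest by comparison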
So it is the preservation of the conditioning of the solution, as much
as or more than the minimax property, that makes orthogonal polynomial
fitting vastly superior to simply fitting against successive powers of
the abscissa.
I'm also surprised there has not been one word about splines. But
that's another topic.