From: "David C. Ullrich" Subject: Re: infinite sum of residues Date: Thu, 16 Mar 2000 11:55:00 -0600 Newsgroups: sci.math Summary: [missing] Florian Albers wrote: > Is is possible to contruct meromorphic functions > > f_j : {Re z < 1} ---> C (j=1, 2, ...) > > with Res_{z=0}f_j(z) = 1 and > > sum_{j=1}^{infinity} = 0 in {Re z < -1} ? The answer is almost surely yes. Hard to be certain until you fill in the bit you left out (the sum of _what_ should equal 0?) Hint: Runge's theorem (or something easier) shows that there are entire functions P_n which converge to 1/z uniformly on compact subsets of {Re(z) < -1} ============================================================================== From: kovarik@mcmail.cis.McMaster.CA (Zdislav V. Kovarik) Subject: Re: best polynomial fit Date: 15 Sep 2000 18:03:27 -0400 Newsgroups: sci.math.num-analysis,sci.physics Summary: [missing] In article , Virgil wrote: :In article :, David :Mehringer wrote: : :> Hi, :> I was wondering if someone could help me with the following problem. :> :> I have N (x,y) data points to which I want to fit polynomial of degree :> A (obviously A < N). :> :> But, I don't know apriori which value of A will produce the "best" :> results. :> Obviously, in general the larger A is, the better the chi square :> values. But, I'd like to minimize A while still producing a :> "reasonable" fit (ie, :> I want to avoid "overfitting" my data). I was wondering if there is :> a standard way to do this. :> :> Thanks and please email. : : :Answer posted and emailed. : :You will have to set your own criteria for what is "best" for you, and :then measure your results against these criteria. : :One primitive way of doing this is to calculate a root mean squared :error for fitted polynomials of increasing degrees until the RMSE is :satisfactorily small. Computers can do this quicly and easily for :several values of A if N is not too large. : :You can then compare the amounts of improvement (decreases in RMSE) as :A increases. Try graphing the RMSE against A for 0 <= A <= N. : :Clearly the RMSE = 0 when A >= N, and will increas as A decreases to A = :0 (constant function) A word of caution: Sometimes the graph of the polynomial minimizes (under constraints on degree) the errors at the data points, but does so "reluctantly, even grudgingly". In between the points, it can go haywire, through unexpected local extrema of excessive magnitude. Many students of numerical analysis know the "Runge Phenomenon": an apparently tame function 1/(1+25*x^2) is interpolated on a uniform mesh within [-1, 1], so that the RMSE is exactly zero, but the values of the interpolating polynomial at some midpoints between the mesh points diverge to infinity. This may be present in a milder form even with least squares fit (of smaller degree). If something like this happens, one may consider minimizing RMSE plus some "penalty function" for excessive size, such as a small multiple of the integral of the square of the polynomial over a suitable interval. This would still be a quadratic minimization problem, leading to a linear system of equations for the coefficients. One more warning: Be careful about what basis of polynomials you use. If it is near-dependent (such as {1, x, x^2, ... , x^A}), chances are that the evaluation will be laden with undue rounding errors (huge coefficients needed to obtain moderately sized data, leading to loss of significant digits). 
==============================================================================
From: kovarik@mcmail.cis.McMaster.CA (Zdislav V. Kovarik)
Subject: Re: best polynomial fit
Date: 15 Sep 2000 18:03:27 -0400
Newsgroups: sci.math.num-analysis,sci.physics
Summary: [missing]

In article , Virgil wrote:

:In article , David Mehringer wrote:
:
:> Hi,
:>    I was wondering if someone could help me with the following
:> problem.
:>
:> I have N (x,y) data points to which I want to fit a polynomial of
:> degree A (obviously A < N).
:>
:> But I don't know a priori which value of A will produce the "best"
:> results. Obviously, in general the larger A is, the better the chi
:> square values. But I'd like to minimize A while still producing a
:> "reasonable" fit (i.e., I want to avoid "overfitting" my data). I
:> was wondering if there is a standard way to do this.
:>
:> Thanks, and please email.
:
:Answer posted and emailed.
:
:You will have to set your own criteria for what is "best" for you, and
:then measure your results against these criteria.
:
:One primitive way of doing this is to calculate a root-mean-squared
:error for fitted polynomials of increasing degree until the RMSE is
:satisfactorily small. Computers can do this quickly and easily for
:several values of A if N is not too large.
:
:You can then compare the amounts of improvement (decreases in RMSE) as
:A increases. Try graphing the RMSE against A for 0 <= A <= N.
:
:Clearly the RMSE = 0 once A >= N-1 (the interpolating polynomial), and
:it will increase as A decreases to A = 0 (a constant function).

A word of caution: sometimes the graph of the polynomial minimizes
(under constraints on degree) the errors at the data points, but does
so "reluctantly, even grudgingly". In between the points, it can go
haywire, through unexpected local extrema of excessive magnitude.

Many students of numerical analysis know the "Runge phenomenon": an
apparently tame function, 1/(1+25*x^2), is interpolated on a uniform
mesh within [-1, 1], so that the RMSE is exactly zero; but as the mesh
is refined, the values of the interpolating polynomial at some
midpoints between the mesh points diverge to infinity. This may be
present in a milder form even with a least-squares fit (of smaller
degree).

If something like this happens, one may consider minimizing the RMSE
plus some "penalty function" for excessive size, such as a small
multiple of the integral of the square of the polynomial over a
suitable interval. This is still a quadratic minimization problem,
leading to a linear system of equations for the coefficients.

One more warning: be careful about which basis of polynomials you use.
If it is nearly dependent (such as {1, x, x^2, ..., x^A}), chances are
that the evaluation will be laden with undue rounding errors (huge
coefficients are needed to reproduce moderately sized data, leading to
a loss of significant digits).

A remedy may consist of pre-orthogonalizing the polynomials over the
set of abscissas, or of using a near-orthogonal system (one that is
orthogonal under another, closely related scalar product). I have seen
the recommendation: use (transformed) Chebyshev polynomials over some
interval containing all the abscissas, so that you know how much, at
worst, you throw away by dropping the high-degree terms.

Sorry, no e-mail; I would have to switch between newsreading programs.

Cheers, ZVK (Slavek).
==============================================================================
From: Eric Rudd
Subject: Re: best polynomial fit
Date: Fri, 15 Sep 2000 17:10:36 -0500
Newsgroups: sci.math.num-analysis,sci.physics

David Mehringer wrote:

> I have N (x,y) data points to which I want to fit a polynomial of
> degree A (obviously A < N).
>
> But I don't know a priori which value of A will produce the "best"
> results. Obviously, in general the larger A is, the better the chi
> square values. But I'd like to minimize A while still producing a
> "reasonable" fit (i.e., I want to avoid "overfitting" my data). I was
> wondering if there is a standard way to do this.

I suppose you are referring to the propensity of high-order polynomials
to oscillate wildly between the fitted points. I haven't worked out any
rigorous theory about this, but there is something akin to the Nyquist
rate and sampling theory going on here.

Suppose that you are trying to fit data over the range x = [-1, +1]. A
simple trigonometric substitution, x = cos(theta), maps theta =
[-pi, +pi] onto the range x = [-1, +1] and changes the polynomial P(x)
into a sum of cosines in theta. I have observed, as a rule of thumb,
that the oscillations are not excessive if there are no gaps in the
data larger than, say, 1/4 cycle of the highest cosine. This would
imply a degree no larger than half the number of points, and that the
data be spaced roughly evenly in theta.

-Eric Rudd
 rudd@cyberoptics.com
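[A minimal numpy sketch, not part of the original posts, making Rudd's
substitution concrete: since T_n(cos(theta)) = cos(n*theta), rewriting
P in the Chebyshev basis exhibits P(cos(theta)) directly as a sum of
cosines. The example polynomial is arbitrary.]

  import numpy as np

  # an arbitrary degree-5 polynomial; coefficients in increasing powers of x
  p = np.array([1.0, -2.0, 0.5, 3.0, -1.0, 0.25])

  # the same polynomial expressed in the Chebyshev basis
  c = np.polynomial.chebyshev.poly2cheb(p)

  theta = np.linspace(-np.pi, np.pi, 9)
  x = np.cos(theta)

  lhs = np.polynomial.polynomial.polyval(x, p)                 # P(cos(theta))
  rhs = sum(c[n] * np.cos(n * theta) for n in range(len(c)))   # cosine sum
  print(np.allclose(lhs, rhs))                                 # True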
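[Back to the earlier replies in the thread. A sketch of Virgil's
RMSE-versus-degree procedure, with a made-up data set; the seed, the
noise level, and the underlying cubic are all hypothetical.]

  import numpy as np

  rng = np.random.default_rng(1)
  x = np.linspace(-1.0, 1.0, 20)
  y = 1.0 - x + 2.0 * x**3 + 0.1 * rng.standard_normal(x.size)

  for A in range(8):
      coef = np.polynomial.polynomial.polyfit(x, y, A)
      resid = np.polynomial.polynomial.polyval(x, coef) - y
      print(A, np.sqrt(np.mean(resid**2)))   # RMSE for each degree A
  # With this data the RMSE drops sharply up to A = 3 (the true degree)
  # and only creeps downward afterwards -- the "knee" to look for when
  # graphing RMSE against A.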
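[The Runge phenomenon Kovarik describes, demonstrated numerically; a
sketch, with the midpoint check standing in for "values between the
mesh points".]

  import numpy as np

  def runge(x):
      return 1.0 / (1.0 + 25.0 * x**2)

  for n in (5, 10, 15, 20):
      mesh = np.linspace(-1.0, 1.0, n + 1)       # uniform mesh: RMSE = 0
      coef = np.polynomial.polynomial.polyfit(mesh, runge(mesh), n)
      mid = (mesh[:-1] + mesh[1:]) / 2.0         # midpoints between nodes
      err = np.abs(np.polynomial.polynomial.polyval(mid, coef) - runge(mid))
      print(n, err.max())
  # The maximum midpoint error *grows* rapidly with the degree n
  # instead of shrinking.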
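[Kovarik's penalty-function idea as one linear system: minimize the sum
of squared residuals plus lambda times the integral of P(x)^2 over
[-1, 1]. A sketch; the data set, degree, and lambda are arbitrary
choices.]

  import numpy as np

  rng = np.random.default_rng(0)
  x = np.linspace(-1.0, 1.0, 30)
  y = np.sin(3.0 * x) + 0.05 * rng.standard_normal(x.size)   # made-up data

  deg, lam = 10, 1e-3
  V = np.vander(x, deg + 1, increasing=True)        # V[i, n] = x[i]**n

  # Gram matrix of the monomials: G[m, n] = integral_{-1}^{1} x^(m+n) dx,
  # which is 2/(m+n+1) for m+n even and 0 for m+n odd
  k = np.arange(deg + 1)
  s = k[:, None] + k[None, :]
  G = np.where(s % 2 == 0, 2.0 / (s + 1.0), 0.0)

  # The objective |V c - y|^2 + lam * c' G c is quadratic in c, so its
  # minimizer solves one linear system (the normal equations):
  c = np.linalg.solve(V.T @ V + lam * G, V.T @ y)
  print(c)   # penalized coefficients, lowest degree first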
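[And Kovarik's basis warning in action: the monomial basis is nearly
dependent, while a (transformed) Chebyshev basis is close to
orthogonal. A sketch comparing the condition numbers of the two design
matrices; the abscissas are hypothetical.]

  import numpy as np

  x = np.linspace(0.0, 1.0, 50)    # abscissas, all in [0, 1]
  deg = 15

  V_mono = np.vander(x, deg + 1, increasing=True)       # {1, x, ..., x^15}
  V_cheb = np.polynomial.chebyshev.chebvander(2.0 * x - 1.0, deg)  # mapped to [-1, 1]

  print(np.linalg.cond(V_mono))    # enormous: near-dependent columns
  print(np.linalg.cond(V_cheb))    # modest: a near-orthogonal system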