From: spellucci@mathematik.tu-darmstadt.de (Peter Spellucci)
Subject: Re: Numerical integration: a question
Date: 13 Apr 2000 09:25:38 GMT
Newsgroups: sci.math.num-analysis

In article <38F556B7.4AD3080E@NoSPAMeecs.umich.edu>, Thomas Kragh writes:
|> If your data is given to you on a uniformly-sampled set of points, and
|> you do not have any other information about the function, I would say
|> that an iterated Simpson's Rule is probably your best bet.
|>
|> Note that the "standard" Simpson's rule is >exact< for polynomials up to
|> 3rd power, so fitting a cubic spline is a waste of time - the cubic
|> polynomial fit is "built into" the numerical integration algorithm
|> already.

This is not completely correct. Think about what Simpson's rule does: it
interpolates three consecutive points by a parabola, integrates this
exactly, and sums up. By chance, if these three points come from a cubic,
it integrates that cubic exactly as well (because the weights and nodes are
symmetric with respect to the midpoint of the interval).

Now, what does the questioner's code do? It interpolates the data globally,
obtaining a cubic *between any two* grid points, evaluates this piecewise
cubic on a refined grid with half the stepsize, and integrates that. If the
data are indeed smooth, the order of the error is O(h^4) in both cases.

But assume his data are subject to some (hopefully small) errors. Then he
can use a smoothing spline, do exactly the same thing, and will get a much
more meaningful result than by simply applying Simpson's rule to the raw
data.

His question, whether there exists some "better" method, is hard to answer.
In principle one can use either piecewise integration by higher-order
Newton-Cotes formulae, or integration of an interpolating spline of higher
order (no problem to compute such), or smoothing splines of higher order
(also no problem in principle, but are there ready-to-use codes out there,
say for a fifth- or seventh-degree smoothing spline?).
But all this makes sense only if the errors in the data are very small,
best zero, *and* the higher derivatives of the function underlying all
this grow in magnitude more slowly than (1/h)^k for order k, h being his
grid size. For smooth data and high-precision arithmetic one could decide
that on the basis of the higher-order divided differences of the data, but
for data subject to some noise this makes no sense.

hope that helps
peter
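P.S. A quick numerical illustration of that divided-difference criterion
(a sketch; the grid, the test function sin(x), and the noise level 1e-4
are made-up assumptions): on exact smooth data the k-th divided
differences approximate f^(k)(x)/k! and stay bounded, while noise of size
eps blows them up like eps/h^k, which is why the criterion is useless for
noisy data.

```python
import numpy as np

def divided_differences(x, y, order):
    """Divided differences of the given order, built by the
    usual recursive table."""
    x = np.asarray(x, dtype=float)
    d = np.asarray(y, dtype=float)
    for k in range(1, order + 1):
        d = (d[1:] - d[:-1]) / (x[k:] - x[:-k])
    return d

x = np.linspace(0.0, 1.0, 21)          # h = 0.05

# Smooth data: 4th divided differences of sin approximate
# sin''''(x)/4! and are bounded by roughly 1/24.
dd_smooth = divided_differences(x, np.sin(x), 4)

# Tiny noise (eps = 1e-4, illustrative) is amplified by about
# eps/h^4 and completely swamps the smooth values.
noisy = np.sin(x) + 1e-4 * np.random.default_rng(1).standard_normal(x.size)
dd_noisy = divided_differences(x, noisy, 4)

print(np.abs(dd_smooth).max(), np.abs(dd_noisy).max())
```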