From: David Wilkinson
Subject: Re: numerical integration methods
Date: Tue, 25 May 1999 08:52:27 +0100
Newsgroups: sci.math.num-analysis
Keywords: General comparison of numerical integration methods

Jeremy

You are thinking along the right lines. If you have a choice of x values
at which to evaluate the integrand, then Gauss gives higher accuracy than
Simpson or Newton-Cotes. Over a larger interval a piecewise Gauss rule can
be used.

However, the trouble with Gauss is that you have no real idea of how
accurate the answer is. You can always increase the accuracy by using a
higher-order Gauss method or by applying it piecewise over smaller
subintervals, but you still do not know the accuracy in terms of correct
decimal places. To get a prescribed accuracy you should look at Romberg's
method or adaptive integration, both of which keep reducing the step size
until a specified error tolerance has been achieved. Romberg uses the same
spacing throughout the range, while adaptive integration only reduces the
spacing where it needs to, and is therefore more efficient in principle.
However, both methods can sometimes be caught out by oscillating
integrands.

All these methods have their pros and cons, and the best choice depends on
the form of the integrand, the accuracy required and your ability to
choose the x values. Other complications include singularities, where the
integrand tends to +/- infinity, and ranges of integration that extend to
infinity.

Always plot the integrand and look at it to see whether the method used is
sensible. You are fitting a polynomial through the calculated points; is
it likely to be a good fit, or do you need a smaller point spacing?
Otherwise it is best to program all the methods and try them all. It is
reassuring if they all agree! It is also worth comparing with MathCad,
Matlab, Mathematica etc. to see if they get the same answer.
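To make the adaptive idea concrete, here is a minimal sketch (my own
illustration, not David's routine): an adaptive Simpson rule compares one
Simpson estimate against two half-interval estimates and subdivides only
where the local error estimate exceeds the tolerance.

```python
import math

def simpson(f, a, b):
    # Basic Simpson's rule on [a, b].
    c = (a + b) / 2
    return (b - a) / 6 * (f(a) + 4 * f(c) + f(b))

def adaptive_simpson(f, a, b, tol):
    # Compare one whole-interval estimate with two half-interval estimates;
    # recurse only where the difference exceeds the local tolerance.
    c = (a + b) / 2
    whole = simpson(f, a, b)
    halves = simpson(f, a, c) + simpson(f, c, b)
    if abs(halves - whole) < 15 * tol:  # standard Simpson error estimate
        return halves + (halves - whole) / 15
    return (adaptive_simpson(f, a, c, tol / 2) +
            adaptive_simpson(f, c, b, tol / 2))

# Example: the integral of sin(x) over [0, pi] is exactly 2.
approx = adaptive_simpson(math.sin, 0.0, math.pi, 1e-10)
print(approx)
```

Note the point made above still applies: the subdivision is driven by an
error *estimate*, and an oscillating integrand can fool it.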
When you have sorted out the method that is the most efficient for the
accuracy you want for your application, you can then use it for other
similar cases.

In article <19990524164840.13213.00004555@ng-fk1.aol.com>, Jemfinch02
writes
>>> In everyone's opinion here, which method of numerical integration
>>> involves the best accuracy to time ratio, and how would that method be
>>> coded?
>
>I've just finished AP calculus BC (calc 1 and 2 in college terms) and so
>my knowledge of math is *very* simplistic compared to the people here. I
>am trying both to extend my math knowledge and my programming capability
>by programming a really fast numerical integration program on my HP48.
>
>>Monte Carlo may be best
>
>I don't know what Monte Carlo is.
>
>I have already programmed Newton-Cotes formulas with 3 points and 5
>points (Simpson's and Bode's) and a 2-point Gaussian quadrature program.
>I'm much more interested in Gaussian quadrature, since an N-point rule is
>exact for polynomials of degree 2N-1, while N-point Newton-Cotes is only
>exact for degree N-1.
>
>I would be interested in programming some method other than Gaussian
>quadrature if it would be faster and more accurate.
>
>Thanks for all the help so far,
>Jeremy
>------------------------------------
>If i ever forget to capitalize a proper noun, forgive me. i have ee
>cummings in my ancestry.
>My ICQ # is 28153190. My AIM/AOL name is either jemfinch02 or Cassius80.
>Have a good day, and good luck in your endeavors!
--
David Wilkinson
==============================================================================
From: Art Werschulz
Subject: Re: numerical integration
Date: 08 Dec 1999 10:19:31 -0500
Newsgroups: sci.math.num-analysis

Hi.

In response to the question
> What is the most accurate method to integrate a function?
"Dann Corbit" writes:
> In general, Gaussian integration is "provably optimal" for smooth,
> continuous functions.

Be careful here.
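The 2N-1 exactness claim that runs through this thread is easy to check
directly. A minimal sketch (my own example): the 2-point Gauss-Legendre
rule on [-1, 1], with nodes +-1/sqrt(3) and unit weights, integrates any
cubic exactly, but fails already on a quartic.

```python
import math

def gauss2(f):
    # 2-point Gauss-Legendre rule on [-1, 1]: nodes +-1/sqrt(3), weights 1.
    x = 1.0 / math.sqrt(3.0)
    return f(-x) + f(x)

cubic = lambda t: t**3 + t**2 + 1   # exact integral on [-1, 1]: 8/3
quartic = lambda t: t**4            # exact integral on [-1, 1]: 2/5

print(gauss2(cubic))    # 8/3 exactly: degree 3 = 2n-1 is handled
print(gauss2(quartic))  # 2/9, not 2/5: degree 4 = 2n is not
```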
If the optimality criterion is that the quadrature rule should integrate
polynomials of as high a degree as possible, then Gaussian quadrature is
optimal. That is, n-point Gaussian quadrature can exactly integrate
polynomials of degree 2n-1; moreover, there is no linear n-point method
that can exactly integrate all polynomials of degree 2n.

However, if the optimality criterion is either achieving minimal error for
a given number of function evaluations, or computing an approximation
having a given error bound at minimal cost, then Dann's other comment
should be taken into account:

> This one does not really have an answer. Is the function smooth and
> continuous? Is the interval of integration finite or infinite? Are there
> asymptotes? Is the location of the asymptotes known? The more you know
> about the original problem, the better the answer can be.

Questions to be considered include:
(1) What is the class of integrands?
(2) How is error measured (worst-case, average-case, ...)?

To be specific, suppose that we are using a worst-case measure, and the
class of integrands is the unit ball of a function space.

If the function space consists of periodic functions for which the r-th
derivative is in L_infinity and the (r-1)st derivative is absolutely
continuous, then a composite midpoint rule is optimal, no matter what the
value of r is. (Its error, however, *does* depend on r.)

If the function space consists of functions analytic on a complex disk
(centered at z=1/2) containing the unit interval, then
(1) Gaussian quadrature is optimal if the radius of the disk is greater
    than 1/2.
(2) Gaussian quadrature is decidedly non-optimal if the radius of the disk
    is 1/2.

--
Art Werschulz (8-{)} "Metaphors be with you." -- bumper sticker
GCS/M (GAT): d? -p+ c++ l u+(-) e--- m* s n+ h f g+ w+ t++ r- y?
Internet: agw@cs.columbia.edu
ATTnet: Columbia U. (212) 939-7061, Fordham U.
(212) 636-6325
==============================================================================
From: hrubin@odds.stat.purdue.edu (Herman Rubin)
Subject: Re: numerical integration
Date: 10 Dec 1999 07:40:03 -0500
Newsgroups: sci.math.num-analysis

In article , Art Werschulz wrote:
>Hi.
>In response to the question
>> What is the most accurate method to integrate a function?
>"Dann Corbit" writes:
>> In general, Gaussian integration is "provably optimal" for smooth,
>> continuous functions.
>Be careful here. If the optimality criterion is that the quadrature
>rule should integrate polynomials of as high a degree as possible,
>then Gaussian quadrature is optimal. That is, n-point Gaussian
>quadrature can exactly integrate polynomials of degree 2n-1; moreover,
>there is no linear n-point method that can exactly integrate all
>polynomials of degree 2n.

Be even more careful; a function can be entire and still be poorly suited
to Gaussian quadrature. One is not likely to use 10^10 points, so
asymptotic arguments are inadequate.

Let me give you one example, which occurred in our department. It was
desired to compute \int (F(x-a))^n f(x) dx, where f is the standard normal
density and F is the cumulative distribution function, the integral of f.
The 32-point Gauss-Hermite coefficients being available, which would give
exact answers for all polynomials of degree 63, this was tested for a=0,
for which the integral is clearly 1/(n+1). The results were not good.

The reason was not too hard to see: F^n, for moderate n, makes a fairly
rapid change from near 0 to near 1. This meant that, over the effective
region, the effective degree was more like 5, and outside that region the
polynomial did not fit. Piecing together Simpson's rules did a good job.

--
This address is for information only. I do not claim that these views are
those of the Statistics Department or of Purdue University.
Herman Rubin, Dept.
of Statistics, Purdue Univ., West Lafayette, IN 47907-1399
hrubin@stat.purdue.edu  Phone: (765)494-6054  FAX: (765)494-0558
==============================================================================
From: hrubin@odds.stat.purdue.edu (Herman Rubin)
Subject: Re: numerical integration
Date: 9 Dec 1999 13:34:09 -0500
Newsgroups: sci.math.num-analysis

In article <82m36o$467$1@merope.saaf.se>, Paul Schlyter wrote:
>In article , Dann Corbit wrote:
>> Khalid Al-Qurashi wrote in message
>> news:384DACB8.74CACB36@leeds.ac.uk...
>>> What is the most accurate method to integrate a function?
>> This one does not really have an answer.
>Actually, it does have an answer: the most accurate method is to
>integrate it analytically, since that method, when it can be used,
>provides an exact answer.
>If the function cannot be integrated analytically, one might be able to
>use the second most accurate method: expand the function in some infinite
>series, and then integrate all terms of the series. The result will be
>another infinite series, which can be truncated at any point, to obtain
>arbitrary accuracy.

You may have to break up the range and do this separately in each part, or
use some other expansion.

>Only if you limit yourself to numerical integration is there no single
>method of integration which works best in all situations.

Numerical analysis is an art, not a science. One method which I have used,
but have not seen in the literature, is to use a rational approximation,
which is then integrated analytically. Most of the methods in the
literature use local or global linear fits and integrate them, which
corresponds to linear combinations of function values.

--
This address is for information only. I do not claim that these views are
those of the Statistics Department or of Purdue University.
Herman Rubin, Dept.
of Statistics, Purdue Univ., West Lafayette, IN 47907-1399
hrubin@stat.purdue.edu  Phone: (765)494-6054  FAX: (765)494-0558
==============================================================================
From: Tanase Costin
Subject: Re: numerical integration
Date: Thu, 9 Dec 1999 14:17:47 -0500
Newsgroups: sci.math.num-analysis
To: Herman Rubin

Hi,

On 9 Dec 1999, Herman Rubin wrote:
> In article <82m36o$467$1@merope.saaf.se>, Paul Schlyter wrote:
> >In article , Dann Corbit wrote:
> >> Khalid Al-Qurashi wrote in message
> >> news:384DACB8.74CACB36@leeds.ac.uk...
> >>> What is the most accurate method to integrate a function?
> Numerical analysis is an art, not a science. One method which I have
> used, but have not seen in the literature, is to use a rational
> approximation, which is then integrated analytically.

I have also used rational interpolation and rational approximation, but it
seems to me that the following happens:

1. If the function is smooth enough, integration of the rational
   approximation works.
2. If the function to be integrated behaves like a square root, or an
   exponential, or anything that has non-pole singularities somewhere in
   the complex plane, the rational-approximation integration is (in my
   cases) worse than a simple Gaussian rule.

From my (not big) experience I would say that splitting the integral and
_making proper variable changes to smooth the integrand_ (that is the most
important thing) gives 12-16 digits of accuracy with fewer than 64
integration points. And, sure, making a proper change of variables can be
as easy as converting Integral( 1/sqrt(x) f(x), {x, 0, 1} ) into
Integral( 2 f(t*t), {t, 0, 1} ), or as complicated as rotating the
integration path in the complex plane.
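The square-root substitution just described is easy to try numerically.
A minimal sketch (the factor f(x) = 1 + x and the composite midpoint rule
are my own illustrative choices): with x = t^2, the integrand
(1/sqrt(x)) f(x) becomes 2 f(t^2), and the singularity at 0 disappears.

```python
import math

def midpoint(g, a, b, n):
    # Composite midpoint rule with n equal subintervals (it never samples
    # the endpoints, so the singularity at 0 is at least not evaluated).
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

f = lambda x: 1.0 + x                      # illustrative smooth factor
original = lambda x: f(x) / math.sqrt(x)   # 1/sqrt singularity at x = 0
transformed = lambda t: 2.0 * f(t * t)     # after x = t^2: a smooth quadratic

exact = 8.0 / 3.0  # integral of (1+x)/sqrt(x) over [0, 1] is 2 + 2/3

err_singular = abs(midpoint(original, 0.0, 1.0, 64) - exact)
err_smooth = abs(midpoint(transformed, 0.0, 1.0, 64) - exact)
print(err_singular, err_smooth)
```

With the same 64 points, the untransformed version loses most of the
midpoint rule's accuracy to the singularity, while the transformed version
converges at the usual O(h^2) rate.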
Costin
==============================================================================
From: hrubin@odds.stat.purdue.edu (Herman Rubin)
Subject: Re: numerical integration
Date: 10 Dec 1999 07:28:48 -0500
Newsgroups: sci.math.num-analysis

In article , Dann Corbit wrote:
>Herman Rubin wrote in message
>news:82pndt$5j1m@odds.stat.purdue.edu...
>> In article <82p8ds$7nk$1@joe.rice.edu>, Bruce Johnson wrote:
>> >Herman Rubin wrote:
>> <> Numerical analysis is an art, not a science. One method
>> <> which I have used, but have not seen in the literature,
>> <> is to use a rational approximation, which is then
>> <> integrated analytically. Most of the methods in the
>> <> literature are to use local or global linear fits and
>> <> integrate them, which corresponds to linear combinations
>> <> of functional values.
>> >I have seen one reference that may have been related, but it
>> >would take a certain amount of library-digging to find the
>> >particular numerical analysis text that mentioned it. In
>> >addition to the usual Lagrange polynomial interpolation
>> >method, some variation using rational approximants was
>> >discussed. Most of the textbooks discuss the relationship
>> >between polynomial interpolation and numerical quadrature
>> >approximation of integrals; presumably a similar development
>> >would apply in the rational approximant case.
>> It is not at all unusual for numerical analysis texts to
>> discuss rational approximation. But I have not seen any
>> which use this as a basis for numerical integration; all
>> the methods I have seen are essentially locally linear.
>> However, for many of the methods, the polynomial basis
>> seems to be downplayed.
>I am curious about your rational approximations.
>Do you create a Pade approximant, and then integrate the resulting ratio
>of polynomials?
>Do you have any error analysis for this method?

One could use any of the "standard" methods of rational approximation.
The Pade approximation is really a special case of the method of Thiele,
which provides a means of obtaining a continued fraction analogous to the
Lagrange interpolation formula. One can, with difficulty, obtain the best
rational approximation. It is much harder to get error bounds, as the
approach to poles is important.

There are greater computational difficulties with rational approximation
than with the piecewise polynomial approximations that are the usual
methods. The computations are harder, and may even become ill-conditioned.
This also happens with polynomial approximations, although there the
process of integration can remove the problem; Newton-Cotes procedures
with enough points do, however, introduce negative weights. Integration of
rational functions usually requires the computation of partial fractions
and logarithms.

I repeat: numerical analysis is an art.

--
This address is for information only. I do not claim that these views are
those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette, IN
47907-1399
hrubin@stat.purdue.edu  Phone: (765)494-6054  FAX: (765)494-0558
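A minimal sketch of the fit-then-integrate-analytically idea discussed
above, under assumptions of my own choosing (the integrand e^(-x), the
one-pole form r(x) = (a + b x)/(1 + c x), and three interpolation points
are all illustrative, not Rubin's actual procedure). The interpolation
conditions are linear in (a, b, c), and the fitted rational function has
an elementary antiderivative involving a logarithm.

```python
import math

def fit_rational(xs, ys):
    # Solve for (a, b, c) in r(x) = (a + b*x)/(1 + c*x) from three samples.
    # Clearing the denominator, each condition r(x_i) = y_i is linear:
    #   a + b*x_i - c*x_i*y_i = y_i.
    # Tiny Gauss-Jordan elimination with partial pivoting on the 3x3 system.
    m = [[1.0, x, -x * y, y] for x, y in zip(xs, ys)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                factor = m[r][col] / m[col][col]
                m[r] = [mr - factor * mc for mr, mc in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

def integrate_rational(a, b, c, lo, hi):
    # Exact antiderivative of (a + b*x)/(1 + c*x), assuming c != 0 and
    # 1 + c*x > 0 on [lo, hi] (no pole inside the interval):
    #   F(x) = (b/c)*x + (a - b/c)/c * ln(1 + c*x)
    F = lambda x: (b / c) * x + (a - b / c) / c * math.log(1.0 + c * x)
    return F(hi) - F(lo)

xs = [0.0, 0.5, 1.0]
a, b, c = fit_rational(xs, [math.exp(-x) for x in xs])
approx = integrate_rational(a, b, c, 0.0, 1.0)
print(approx, 1.0 - math.exp(-1.0))  # three samples already give a few digits
```

This also shows where the trouble starts: the fitted pole at x = -1/c must
stay away from the interval of integration, and nothing in the fit itself
guarantees that it will.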