From: spellucci@mathematik.tu-darmstadt.de (Peter Spellucci)
Newsgroups: sci.math.num-analysis
Subject: Re: Finding material about Runge-Kutta methods?
Date: 30 Nov 1998 18:29:22 GMT

In article <73ui1v$dai$1@news-inn.inet.tele.dk>,
"Kasper Vibe Grevsen" <"remove"kaspervg@post6.tele.dk> writes:
|> >Kasper Vibe Grevsen wrote:
|> >> snip
|> I thought so due to the formulae; my material just didn't explain it.
|> I still have a bit of a problem, however, since two books claim the maximal
|> error (dealing with fourth-order Runge-Kutta) is proportional to h^5, where
|> h is the step. Contrary to this, another book claims abs(e) is less than or
|> equal to some constant L times h^4. This would be similar to the Simpson
|> rule, but is it correct?

h^5 is the order of the local error, the error made in one step: if y(t) is
exact and you compute an approximation to y(t+h), the error is of order h^5.
h^4 is then the order of the error for obtaining y(T) for some fixed T > t0,
as h -> 0. Observe that with h = (T-t0)/N you have N steps to compute, and
N -> infinity as h -> 0. Since N = (T-t0)/h, the errors sum up to order h^4.
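In symbols, and glossing over the propagation of errors between steps (a
Gronwall-type factor that only changes the constant), the accumulation
argument is just:

  |global error| <= sum_{k=1}^{N} C h^5 = N C h^5
                  = ((T-t0)/h) C h^5 = (T-t0) C h^4.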
hope this helps
peter
==============================================================================
From: Jean-Philippe Grivet
Newsgroups: sci.math.num-analysis
Subject: Re: Finding material about Runge-Kutta methods?
Date: Tue, 01 Dec 1998 11:47:49 +0100

Peter Spellucci wrote:
> any book about numerical analysis will have at least a basic account of this.
> a very good source is
> Hairer & Norsett & Wanner: Solving Ordinary Differential Equations I
> (now in the 3rd edition, Springer).

Hi!
Hairer and Wanner have a nice set of lecture notes (in French) on the net, at:
http://www.unige.ch/math/folks/hairer/polycop.html
Regards,
JPG
--
****************************************************
Jean-Philippe GRIVET
Centre de Biophysique Moleculaire
rue Charles-Sadron, 45071 Orleans cedex 02, France
Telephone: 02 38 25 76 32
Telecopie: 02 38 63 15 17
e-mail: grivet@cnrs-orleans.fr
****************************************************
==============================================================================
From: spellucci@mathematik.tu-darmstadt.de (Peter Spellucci)
Newsgroups: sci.math.num-analysis
Subject: Re: Finding material about Runge-Kutta methods?
Date: 30 Nov 1998 11:59:34 GMT

In article <73s10l$2mso$1@news-inn.inet.tele.dk>,
"Kasper Vibe Grevsen" <"remove"kaspervg@post6.tele.dk> writes:
|> Hi, I am about to write an exercise in Numerical Methods for solving the
|> differential equation
|> dy/dx = G(x,y).
|> My problem is that Danish libraries do not have very much material about
|> this topic.
|> The material I've got only claims: "It is difficult to discuss the theory
|> behind Runge-Kutta methods", so...
|> If you know about some good resources on the net (or some good books), I
|> would appreciate it highly if you tell me about them.

any book about numerical analysis will have at least a basic account of this.
a very good source is
Hairer & Norsett & Wanner: Solving Ordinary Differential Equations I
(now in the 3rd edition, Springer).
you write from a high school, and if this is a high school in the usual
sense, then the topic may indeed not be easily accessible for you, since
somewhat higher calculus is involved. i will try to give you the basic idea:

you know that
  y(t+h) = y(t) + \int_t^{t+h} y'(s) ds,
and from the differential equation that y'(s) = G(s,y(s)). Replace the
integral by a quadrature formula:
  \int_t^{t+h} G(s,y(s)) ds  approx=  h * sum_{i=1}^m \gamma_i G(s_i,y(s_i)).
The \gamma_i are the so-called "weights" of the quadrature formula and the
s_i the "nodes". An example is the Simpson rule with
  \gamma = (1/6, 2/3, 1/6),   s = (t, t+h/2, t+h).
m is the so-called number of "stages" of the method.

Now, inserting this in the above sum, you come into trouble: there appear
values y(s_i) which you don't know yet (to the right of the current value t).
Now comes the trick: you write
  y(s_i) = y(t) + \int_t^{s_i} y'(v) dv,
and for this integral you use an "unusual" quadrature formula that uses new
weights, but the _s a m e_ nodes as before; that is, you write
  y(s_i)  approx=  y(t) + h * sum_{j=1}^m \beta_{i,j} G(s_j,y(s_j)),
where the \beta_{i,j} are now the weights. Obviously, to be reasonable, you
will have
  h * sum_{j=1}^m \beta_{i,j} = s_i - t,   i = 1,..,m.
(The s_i are usually written as t + \alpha_i*h, with the \alpha_i tabulated;
for simplicity of typesetting with this damned editor i avoided this.)

If you write down all this, you come up with an (in general nonlinear) system
of m equations for the m unknowns y(s_1), .., y(s_m). This system becomes
simple if s_1 = t, i.e. \alpha_1 = 0, and \beta_{i,j} = 0 for j >= i, because
then you have a sequence of m explicit evaluations of G. These methods are
called the explicit Runge-Kutta methods, but there are very useful implicit
methods too (yes, it is true, you must then solve a nonlinear system for
every time step). It is allowed to use the same value for \alpha_i more than
once, and the classical Runge-Kutta formula of order 4 does this with the
middle node, which is used twice.

Now there remains the question how to determine the \alpha_i, \gamma_i, and
\beta_{i,j}. Given y(t) as exact, denote the approximation for y(t+h) by
yh(t+h). The "old" methods were explicit, and the determination of the
coefficients aimed at maximizing the "order", that is, the power p in the
expression
  || y(t+h) - yh(t+h) || <= const(T) * h^p,
where the constant depends only on G and T, for t0 <= t <= T, with t0, y0
the given initial point. In principle this expression is obtained by a
Taylor expansion of the expression above (very cumbersome, if done naively).
It was Butcher who invented the structure of "Butcher trees" for describing
the elementary differentials of G which are connected with the different
powers of h and the coefficients, in order to have a systematic way of doing
this development. But nevertheless this is somewhat involved, though very
well described in the book of Hairer et al.
hope this helps
peter
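To make the explicit case concrete, here is a minimal sketch in Python of the
classical 4-stage, fourth-order method described above (the names rk4_step
and solve and the test problem y' = y are illustrative choices, not from the
post):

    # Classical 4th-order explicit Runge-Kutta for dy/dx = G(x, y).
    # A minimal sketch; names and test problem are illustrative.
    import math

    def rk4_step(G, x, y, h):
        # Four explicit stage evaluations of G; the middle node x + h/2
        # is used twice, as noted above for the classical formula.
        k1 = G(x, y)
        k2 = G(x + h/2, y + h/2 * k1)
        k3 = G(x + h/2, y + h/2 * k2)
        k4 = G(x + h, y + h * k3)
        # Weights gamma = (1/6, 1/3, 1/3, 1/6): the two middle stages
        # together carry the Simpson weight 2/3 from the example above.
        return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

    def solve(G, x0, y0, T, N):
        # March from x0 to T in N equal steps of size h = (T - x0)/N.
        h = (T - x0) / N
        x, y = x0, y0
        for _ in range(N):
            y = rk4_step(G, x, y, h)
            x += h
        return y

    # Test problem: y' = y, y(0) = 1, exact solution y(x) = exp(x).
    for N in (10, 20, 40, 80):
        err = abs(solve(lambda x, y: y, 0.0, 1.0, 1.0, N) - math.e)
        print(N, err)   # error shrinks by roughly 16x as h is halved

Each halving of h should cut the printed error by roughly 2^4 = 16, which is
the global order h^4 discussed in the first article above.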