From: spellucci@mathematik.tu-darmstadt.de (Peter Spellucci)
Subject: Re: Root of a function algorithm
Date: 22 Apr 1999 12:02:12 GMT
Newsgroups: sci.math.num-analysis
Keywords: Runge-Kutta, Jenkins-Traub, etc.

In article <371d141c$0$230@nntp1.ba.best.com>, Alexander Poquet writes:
|> Hi, I'm a student writing a program to plot implicit
|> relations in 2space (eg, x^2+y^2=4). My current
|> angle of attack on the problem is to pick a variable
|> and hold it constant, thereby making it a func in 1
|> var, pop it up a dimension and calculate its root.

not a good idea. first have a look at systems like Mathematica,
which have the ability to plot implicitly defined functions built in.
maybe the mathematics behind it can be found in the documentation;
it should be some method like continuation.

|> i hear most often mentioned are the 'jenkins-traub'
|> algorithm and the eigenvalue method.
|> btw, i am dealing with mostly polynomials, so if you
|> know some ultra efficient method that works only for

this is exactly jenkins-traub. in crude terms, it is the eigenvalue
QR method applied to the companion matrix of your polynomial (the
Frobenius matrix whose characteristic polynomial is the given one).
software: toms 493 (http://www.netlib.org -> browse repository ->
select toms -> select 493)

|> ps. unrelated, but how does the runge kutta method
|> work for solving diff eqs? always wanted to know
|> that.

that is easy to explain: you have

  y' = f(t,y),   y(t0) = y0,

hence

  y(t+h) = y(t) + \int_t^{t+h} f(s,y(s)) ds.

for a moment, let g(s) = f(s,y(s)) (imagine you knew y(s)). then

  \int_t^{t+h} g(s) ds  approx=  h*\sum_{i=1}^m w_i g(s_i)

(a quadrature formula like simpson's rule), where the w_i are the
weights and the s_i the nodes. the s_i are of the form t + \alpha_i*h
with \alpha_i in [0,1]. (repeated \alpha_i's will be allowed in the
following, although at first look this seems strange.) now

  g(t+\alpha_i*h) = f(t+\alpha_i*h, y(t+\alpha_i*h))

and the y-values are unfortunately unknown if \alpha_i > 0.
but name these g-values k_i (they are unknown). now comes the trick:

  y(t+\alpha_i*h) = y(t) + \int_t^{t+\alpha_i*h} g(s) ds

and we express this new integral by a quadrature rule which uses
different weights, but the same nodes as the original one, i.e. we
write

  \int_t^{t+\alpha_i*h} g(s) ds  approx=  h*\sum_{j=1}^m \beta_{i,j} k_j.

this gives a system of m equations for the m unknowns k_1,...,k_m:

  k_i = f(t+\alpha_i*h, y(t) + h*\sum_{j=1}^m \beta_{i,j} k_j),  i=1,...,m.

if the \beta_{i,j} fulfill \beta_{i,j} = 0 for j >= i and \alpha_1 = 0,
then k_1 = f(t,y(t)) and you can compute every further k_i in
sequence. this is the case for the "classical" rk formula with

  \alpha = 0, 0.5, 0.5, 1

  \beta =  0,   0,   0,  0
           1/2, 0,   0,  0
           0,   1/2, 0,  0
           0,   0,   1,  0

  w_i = 1/6, 1/3, 1/3, 1/6.

otherwise you must solve the system for the k_i (a nonlinear one,
if f is nonlinear in y) and then proceed with the step

  y(t+h) approx= y(t) + h*\sum_{i=1}^m w_i k_i.

(there remains a lot more to explain, like how to select h, and ...)

hope this helps
peter