From: kovarik@mcmail.cis.McMaster.CA (Zdislav V. Kovarik)
Subject: Re: Roots-Finding-Methods
Date: 4 Jan 2000 13:52:17 -0500
Newsgroups: comp.lang.fortran,sci.math.num-analysis

In article <3871ddc2.14254034@news.et.tudelft.nl>, Dany Derijken wrote:
:
:Dear All Tutors and Gurus,
:
:I am very sorry if there is a FAQ similar to this!
:
:I have to find all the roots of a real polynomial
:  a0 + a1 x^1 + a2 x^2 + .... + an x^n = 0
:for example:
:  n=6,
:  a0=1, a1=2, a2=3, a3=4, a4=5, a5=6, a6=3
:
:There are many different methods that can be used to solve this
:problem...  It is extremely difficult for me, a telecom student, to
:find out which method is fast, accurate and reliable.
:
:Would you please take a little time and guide me:
:1) to select the best method for solving the above problem;
:2) and, where can I find free code for that method?
:3) In the above test case the ROOTS function of Matlab gives, for
:   a = 3:
:   x =
:     0.60500033370606 + 1.16877089448037i
:     0.60500033370606 - 1.16877089448037i
:    -0.60500033370606 + 1.16877089448037i
:    -0.60500033370606 - 1.16877089448037i
:    -1.00000000000000 + 0.00000002176468i
:    -1.00000000000000 - 0.00000002176468i
:   No real root; there are 3 complex conjugate pairs of roots.
:
:   For a = 2.99999999999999:
:   x =
:     0.60500033370605 + 1.16877089448037i
:     0.60500033370605 - 1.16877089448037i
:    -0.60500033370606 + 1.16877089448037i
:    -0.60500033370606 - 1.16877089448037i
:    -1.00000004160362
:    -0.99999995839638
:   Two real roots, and 2 complex conjugate pairs of roots.
:
:I wonder whether a Fortran version of this method is available
:somewhere...  Have you any ideas?  Please give me a hand.
:I would greatly appreciate any help with this question.
:Thank you very much for your time,
:
:Dany
:
:NB: I have found a program in Numerical Recipes.
:Is the algorithm used in this book the best one?
:

First: MATLAB responds to the commands

  >> p = [3 6 5 4 3 2 1]; factor(poly2sym(p))

with

  ans =
  (3*x^4+2*x^2+1)*(x+1)^2

You made a common mistake - listing the coefficients in reverse order.
You calculated the roots of the polynomial with coefficients
[1 2 3 4 5 6 3] instead.  The MATLAB routine was written with the
assumption that the powers in the polynomial are listed in descending
order.  The roots you found are the reciprocals of what you might have
planned them to be.

The properties of polynomials with repeated roots are well studied, and
the outcome you observed is no surprise.  Under small perturbations, say
of size c, in the coefficients, a root of multiplicity m (m is 2 in your
case) typically spreads into m distinct roots, asymptotically at a
distance (constant)*c^(1/m).  In particular, simple roots move away
approximately proportionally to the size of the perturbation, at least
for some low level of perturbation.

If you divide out the factor (x+1)^2 from the unperturbed polynomial,
you get (taking the reversal into account) the roots of x^4 + 2*x^2 + 3,
and MATLAB gives

  -0.60500033370606 + 1.16877089448037i
  -0.60500033370606 - 1.16877089448037i
   0.60500033370606 + 1.16877089448037i
   0.60500033370606 - 1.16877089448037i

(good to the last digit!)

A good discussion can be found in J. H. Wilkinson's classic "Rounding
Errors in ..." (I do not have it right with me, but a library search
will help).  Actually, Wilkinson presents an example where the roots
look well separated to the human eye ("can there be anything better
separated than equally spaced?") but, from the perturbation analysis
point of view, they are clustered, and the change from real roots to
complex conjugate pairs is dramatic.
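[The reversal and the (x+1)^2 factor are easy to check directly; a
minimal sketch, not part of the original post, using only the standard
MATLAB functions roots, fliplr and deconv:

    c = [1 2 3 4 5 6 3];   % coefficients as typed: x^6 + 2x^5 + ... + 6x + 3
    p = fliplr(c);         % the intended polynomial: 3x^6 + 6x^5 + ... + 2x + 1
    sort(roots(p))         % compare with ...
    sort(1 ./ roots(c))    % ... the reciprocals of the roots actually computed
                           % (they agree, up to rounding near the double root)
    [q, r] = deconv(c, [1 2 1])   % divide (x+1)^2 out of the typed polynomial:
                                  % q = [1 0 2 0 3], i.e. x^4 + 2x^2 + 3, r = 0
    roots(q)                      % the four complex roots quoted above
]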
So again, the problem is in the problem, not in the method; the method
does the best it can under the limitations of machine accuracy.  With
multiple or clustered roots and small perturbations, you can fool most
of these routines.

Sometimes polynomial rootfinding suffers from the user's unintentional
"destabilization"; my favorite example (modified from the Conte-de Boor
textbook) is: find the roots of

  x^7 - 7*x^6 + 21*x^5 - 35*x^4 + 35*x^3 - 21*x^2 + 7*x - 1

A plot of the function near x=1 shows dozens of false sign changes,
although the polynomial should have no more than 7 roots.  MATLAB's
"roots" routine returns

  1.00609591491435 + 0.00291762675386i
  1.00609591491435 - 0.00291762675386i
  1.00154356667825 + 0.00660599221344i
  1.00154356667825 - 0.00660599221344i
  0.99577294367089 + 0.00534306983874i
  0.99577294367089 - 0.00534306983874i
  0.99317514947301

distributed close to the vertices of a regular heptagon, just as theory
predicts, and it is not MATLAB's fault.  (What piece of theory?
Weierstrass's Preparation Theorem from complex analysis.)

Of course the polynomial is (x - 1)^7 and has the root 1 with
multiplicity 7.  The expansion in powers of x destroyed the perfect
correlation between the coefficients; the routine treated its
coefficient vector as any other vector, integer or not.

Good luck, ZVK(Slavek).
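[The effect is easy to reproduce; a minimal MATLAB sketch, not part of
the original post:

    p = poly(ones(1,7));             % coefficients of (x - 1)^7, computed exactly
    x = linspace(0.995, 1.005, 401);
    plot(x, polyval(p, x))           % the computed values are dominated by rounding
                                     % noise, hence the many spurious sign changes
    roots(p)                         % seven computed roots scattered around 1,
                                     % roughly at the vertices of a small heptagon
]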
==============================================================================

From: "Dann Corbit"
Subject: Re: Roots-Finding-Methods
Date: Tue, 4 Jan 2000 10:01:55 -0800
Newsgroups: comp.lang.fortran,sci.math.num-analysis

Look here:
http://www.elsevier.com/homepage/sac/cam/mcnamee/

--
C-FAQ: http://www.eskimo.com/~scs/C-faq/top.html
"The C-FAQ Book" ISBN 0-201-84519-9
C.A.P. Newsgroup: http://www.dejanews.com/~c_a_p
C.A.P. FAQ: ftp://38.168.214.175/pub/Chess%20Analysis%20Project%20FAQ.htm

[same Original Article quoted as above --djr]

==============================================================================

From: "Rafael R. Sevilla"
Subject: Re: Roots-Finding-Methods
Date: Wed, 05 Jan 2000 02:28:26 +0800
Newsgroups: sci.math.num-analysis

Dany Derijken wrote:
>
> NB: I have found a program in Numerical Recipes.
> Is the algorithm used in this book the best one?

No, but it is good enough for most purposes - if you're talking about
the Laguerre's method code there, that is.  It seems to work very well
for nearly all polynomials that arise in common practice, although
there are more robust algorithms such as Jenkins-Traub which are
supposed to be better, though I haven't been able to find any
references on them in our libraries.  If you only want the real roots,
then Sturm sequences may be your best bet.  Newton-Raphson is only good
for polishing up roots once you have them to a certain tolerance.

--
------------------------------------------------------------------------
| Rafael R. Sevilla                            dido@pacific.net.ph     |
| Instrumentation, Robotics, and Control Laboratory                    |
| College of Engineering, University of the Philippines, Diliman       |
------------------------------------------------------------------------

==============================================================================

From: spellucci@mathematik.tu-darmstadt.de (Peter Spellucci)
Subject: Re: Roots-Finding-Methods
Date: 5 Jan 2000 17:09:46 GMT
Newsgroups: comp.lang.fortran,sci.math.num-analysis

In article <3871ddc2.14254034@news.et.tudelft.nl>,
derijken@mailcity.com (Dany Derijken) writes:
|>
|> Dear All Tutors and Gurus,
|>
|> I am very sorry if there is a FAQ similar to this!
|>
|> I have to find all the roots of a real polynomial
|>   a0 + a1 x^1 + a2 x^2 + .... + an x^n = 0
......
snip (discussion of matlab run etc.)

For a general polynomial with possibly complex roots, the Jenkins-Traub
method is surely one of the most efficient and reliable, and I would bet
that the Matlab code is based on that.  You find it as "rpoly" in
netlib/toms/493, http://www.netlib.org.  There are more good methods
(which always converge), e.g. Hirano's method or Laguerre's method.
For some more, see

  http://plato.la.asu.edu/guide.html  -> problems -> zero

hope this helps
peter

==============================================================================

From: Stephen Vavasis
Subject: Re: Roots-Finding-Methods
Date: Wed, 05 Jan 2000 12:43:32 -0500
Newsgroups: sci.math.num-analysis

Peter Spellucci wrote:
>
> In article <3871ddc2.14254034@news.et.tudelft.nl>,
> derijken@mailcity.com (Dany Derijken) writes:
> |>
> |> Dear All Tutors and Gurus,
> |>
> |> I am very sorry if there is a FAQ similar to this!
> |>
> |> I have to find all the roots of a real polynomial
> |>   a0 + a1 x^1 + a2 x^2 + .... + an x^n = 0
> ......
> snip (discussion of matlab run etc.)
> For a general polynomial with possibly complex roots, the Jenkins-Traub
> method is surely one of the most efficient and reliable, and I would
> bet that the Matlab code is based on that.

Matlab's root-finding algorithm forms the companion matrix of the
polynomial and then computes eigenvalues of the companion matrix.  This
method is known to be backward stable (Edelman & Murakami; Toh &
Trefethen) assuming the leading coefficient of the polynomial is not too
small.  See also the following recently completed paper:

  ftp://ftp.cs.cornell.edu/pub/vavasis/papers/onevar.ps

by G. Jonsson and me, which has references to prior work.  In that paper
we observe a shortcoming with 'roots' when the leading coefficient of
the polynomial is small.

As already noted by a previous poster, the polynomial in Derijken's
original posting is ill-conditioned, and hence its roots are sensitive
to small changes in the coefficients.  Thus, the fact that the roots
computed by Matlab change substantially when a coefficient is perturbed
does not indicate an accuracy-related problem in 'roots'.  (But there is
a different accuracy-related problem, as mentioned in the previous
paragraph.  Also, 'roots' is not necessarily the most efficient
root-finding method.)

-- Steve Vavasis
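[The companion-matrix approach can be imitated in two lines with the
built-in MATLAB functions compan and eig; a minimal sketch, not part of
the original posts ('roots' also handles leading and trailing zero
coefficients, but the idea is the same):

    p = [3 6 5 4 3 2 1];   % the polynomial from the original posting, highest power first
    C = compan(p);         % companion matrix of p/p(1)
    sort(eig(C))           % its eigenvalues ...
    sort(roots(p))         % ... agree with roots(p) up to rounding and ordering
]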
==============================================================================

From: spellucci@mathematik.tu-darmstadt.de (Peter Spellucci)
Subject: Re: Roots-Finding-Methods
Date: 6 Jan 2000 14:57:57 GMT
Newsgroups: sci.math.num-analysis

In article <387382C4.4A1B@cs.cornell.edu>, Stephen Vavasis writes:
snip (previous discussion)
|> Matlab's root-finding algorithm forms the companion matrix of the
|> polynomial and then computes eigenvalues of the companion matrix.
|> This method is known to be backward stable (Edelman & Murakami; Toh &
|> Trefethen) assuming the leading coefficient of the polynomial is not
|> too small.  See also the following recently completed paper:
|>   ftp://ftp.cs.cornell.edu/pub/vavasis/papers/onevar.ps
|> by G. Jonsson and me, which has references to prior work.  In that
|> paper we observe a shortcoming with 'roots' when the leading
|> coefficient of the polynomial is small.

And Jenkins-Traub is nothing but the retranslation of this into the
language of polynomial transformations.  I hope that Matlab doesn't
really form the companion matrix and operate on that!

The original poster didn't ask for a discussion of the sensitivity of
polynomial roots, he asked for software.  The sensitivity problem is a
(hopefully) well-known area since the appearance of Wilkinson's book
(Rounding Errors ...) and the republishing of the example in "The
perfidious polynomial".  One must distinguish between the sensitivity of
the problem (if the polynomial is originally given by its Maclaurin
expansion and the coefficients are the original data, you have to accept
that as is; the question remains whether this polynomial representation
is really the true input data) and the stability of the algorithm.  Here
the problem of explicit deflation occurs, and you can avoid it by the
Newton-Maehly trick.

peter

==============================================================================

From: "Tony T. Warnock"
Subject: Re: Roots-Finding-Methods
Date: Thu, 06 Jan 2000 08:06:06 -0700
Newsgroups: comp.lang.fortran,sci.math.num-analysis

One thing to keep in mind is that the problem of root-finding is
inherently troublesome.  The roots of a polynomial are continuous
functions of the coefficients, but not differentiable functions at
multiple roots.  Multiple roots are difficult to distinguish from roots
that are merely very close together.

The polynomial with roots at 1, 2, ..., 20 is an interesting example.
The last coefficient is 20!, which is about 2.4*10**18.  Changing the
last coefficient by one part in 10**12 gives a polynomial with several
pairs of complex roots.

Another point is that all continuous functions may be uniformly
approximated by polynomials.  This includes those that are not
differentiable; it also includes paths of Brownian motion.  Finding the
roots of such polynomials is not easy.

==============================================================================

From: "Alan Miller"
Subject: cpoly & rpoly
Date: Sun, 09 Jan 2000 10:29:44 GMT
Newsgroups: sci.math.num-analysis,comp.lang.fortran

I have uploaded versions of these two classic old CACM algorithms to my
overflow web site.  I have removed the machine-dependent code which
caused trouble for at least one user.

Overflow site:  http://users.bigpond.net.au/amiller/

--
Alan Miller, Retired Scientist (Statistician)
CSIRO Mathematical & Information Sciences
Alan.Miller -at- vic.cmis.csiro.au
http://www.ozemail.com.au/~milleraj
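[The 1, 2, ..., 20 example mentioned by Tony Warnock above is easy to
experiment with; a minimal MATLAB sketch, not part of any of the
original posts.  Note that poly(1:20) is itself already inexact, since
coefficients of the size of 20! cannot be represented exactly in double
precision:

    p = poly(1:20);                       % monic polynomial with roots 1, 2, ..., 20
    q = p;  q(end) = q(end)*(1 + 1e-12);  % perturb the constant term by one part in 10^12
    w = p;  w(2) = w(2) - 2^(-23);        % Wilkinson's classic perturbation of the
                                          % x^19 coefficient
    [roots(p) roots(q) roots(w)]          % compare the three computed root sets; the
                                          % third column shows the familiar complex
                                          % conjugate pairs
]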