From: "Michal Kvasnicka" Subject: Re: Approximations Date: Sat, 26 Aug 2000 19:07:00 +0200 Newsgroups: sci.math.num-analysis,sci.math Summary: [missing] Dear Dave and Paul, the Gerlach scheme has a better numerical (round-off error, etc.) and analytical (in some condition Gerlach's iteration step may represent an exact (not numerical !!!) solution of the solved problem) properties than classic Newton's scheme. So, in the case of high precision computation the Gerlach's scheme significantly outperform classic Newton scheme. See, for more information basic Gerlach's paper: Gerlach J 1994, SIAM Review, Vol. 36,2,272-276, June. Both of you are right, that number of operation for each Gerlach's iteration step is greater than for Newton's step, but the final #operation/#iteration balance is not so terrible as is mentioned in your replays. Michal ============================================================================== From: "Michal Kvasnicka" Subject: Re: Approximations Date: Fri, 25 Aug 2000 18:15:57 +0200 Newsgroups: sci.math.num-analysis,sci.math So, there is additional suggestion about Gerlach's iterative step code generation. A) Standard Gerlach's iterative step: x_= x-1/2*(x^2-a)*(3*x^2+a)*(3*x^6+27*a*x^4+33*a^2*x^2+a^3)/(x*(x^4+10*a*x^2+5*a ^2)*(5*x^4+10*a*x^2+a^2)) 10*additions+39*multiplications+3*divisions B) Optimized Gerlach's iterative step: t1 = x^2, t6 = t1^2, t11 = a^2, t19 = 10*a*t1, x_ = x-1/2*(t1-a)*(3*t1+a)*(3*t6*t1+27*a*t6+33*t11*t1+t11*a)/(x*(t6+t19+5*t11)*(5 *t6+t19+t11)) 18*multiplications+5*assignments+10*additions+3*divisions You can see, that Optimized Gerlach's iterative step is approximately twice faster than non-optimized version. I think this scheme may be useful for your needs. Regards, Michal -- Michal Kvasnicka ERA a.s. (Praha) Podebradska 186/56, 180 66 Praha 9, Czech Republic E-mail: m.kvasnicka@era.cz WWW: www.era.cz Phone: +420-2-66107 441 Fax: +420-2-81868025 ============================================================================== From: rusin@vesuvius.math.niu.edu (Dave Rusin) Subject: Re: Approximations Date: 25 Aug 2000 23:14:58 GMT Newsgroups: sci.math.num-analysis,sci.math [About numerical evaluations of square roots] In article <8o63du$ach$1@news.vol.cz>, Michal Kvasnicka wrote: >More suitable than previously suggested scheme is Gerlach's scheme (possible >extension of the classic Newton method) for square roots. The convergence is >"decadic" , meaning that the number of good digits expands tenfold on each >iteration. > >Initialize: >x := a/2 > >Then as we iterate: >x := x - >((x^2-a)*(3*x^2+a)*(3*x^6+27*a*x^4+33*a^2*x^2+a^3))/(2*x*(x^4+10*a*x^2+5*a^2 >)*(5*x^4+10*a*x^2+a^2)) > >we find thet, EXTREMELY rapidly: >x -> sqrt(a) Impressive. But I'm not sure I see the point. We know that the iteration x -> (x+a/x)/2 uses a rational function with numerator of degree 2 to accomplish roughly a doubling of the number of correct digits. Iterating this, we obtain rational functions x -> (x^4+6*a*x^2+a^2)/x/(x^2+a)/4 x -> (x^8+28*x^6*a+70*a^2*x^4+28*x^2*a^3+a^4)/x/(x^2+a)/(x^4+6*a*x^2+a^2)/8 x -> (x^16+120*x^14*a+1820*x^12*a^2+8008*x^10*a^3+12870*a^4*x^8+ 8008*x^6*a^5+1820*x^4*a^6+120*x^2*a^7+a^8)/x/(x^2+a)/ (x^4+6*a*x^2+a^2)/(x^8+28*x^6*a+70*a^2*x^4+28*x^2*a^3+a^4)/16 and so on, which roughly multiply the number of correct digits by the degree of the numerators (4, 8, 16). 
The posted formula

  x -> x - ((x^2-a)*(3*x^2+a)*(3*x^6+27*a*x^4+33*a^2*x^2+a^3))
           /(2*x*(x^4+10*a*x^2+5*a^2)*(5*x^4+10*a*x^2+a^2))

follows the same pattern, but is it worth writing out separately? If the
degree-10 version actually required a smaller flop count (say) to
achieve N-digit accuracy, wouldn't the degree-16 version be even better?

I don't know much about doing computation for a living, but it certainly
seems to me the "much accuracy with each iteration" is a bit of a sham,
since each iteration is more or less tantamount to performing multiple
iterations of something simpler. I would be delighted to learn
otherwise.

I inquire because I have similarly seen more convoluted formulae
proposed as replacements for Newton's method for other problems besides
X^2-a=0. One has to ask whether there is value in doing half as many
iterations of a process which is twice as costly to carry out.

(BTW, the iteration x -> (x+a/x)/2 certainly has the advantage that it
may be coded more compactly and with less chance of programmer error.
And IIRC it already produces provably optimal answers starting with
x = floor(sqrt(a)), in that all subsequent iterates are among the
continued-fraction approximants to sqrt(a).)

dave
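[Editorial sketch, not part of the original posts: a minimal Python comparison
of iteration counts for the simple Newton map and the posted degree-10 map at
a fixed working precision, so the #operations/#iterations trade-off discussed
above can be weighed directly. It assumes only the standard decimal module;
the names, tolerance and precision are illustrative choices.]

  from decimal import Decimal, getcontext

  getcontext().prec = 120            # work with 120 significant digits

  def newton_step(x, a):
      # x -> (x + a/x)/2, roughly doubles the number of correct digits
      return (x + a / x) / 2

  def gerlach_step(x, a):
      # the degree-10 map quoted in the thread (claimed "decadic" convergence)
      x2 = x * x
      x4 = x2 * x2
      num = (x2 - a) * (3*x2 + a) * (3*x4*x2 + 27*a*x4 + 33*a*a*x2 + a*a*a)
      den = 2 * x * (x4 + 10*a*x2 + 5*a*a) * (5*x4 + 10*a*x2 + a*a)
      return x - num / den

  def iterate(step, a, tol=Decimal("1e-100")):
      x, n = a / 2, 0                # same starting value as in the thread
      while abs(x*x - a) > tol * a:  # stop when x^2 is within tol*a of a
          x, n = step(x, a), n + 1
      return x, n

  a = Decimal(2)
  for name, step in (("Newton", newton_step), ("Gerlach", gerlach_step)):
      root, n = iterate(step, a)
      print(name, n, "iterations")   # fewer Gerlach steps, but each costs more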