From: rusin@vesuvius.math.niu.edu (Dave Rusin)
Newsgroups: sci.math.symbolic
Subject: Re: Solving = C
Date: 24 Feb 1998 05:12:02 GMT

In article , Ronald Bruck wrote:
>Suppose I have n variables x[1], x[2], ..., x[n], and I have a family of
>quadratic equations
>
>    sum_{i,j}(a[p,i,j] x[i] x[j]) + sum_i(b[p,i] x[i]) = c[p]
>
>where the a's, b's and c's are integers.  Conceptually, this is easy to
>solve, exploiting the special structure

There is nothing special here: any variety over the rationals can be
specified by a collection of quadratic polynomials (e.g. xyz = w^5 is
equivalent to {xy = v, w^2 = u, u^2 = t, vz = tw}).

>but Mathematica and Maple seem to
>want to use generic Groebner-basis methods, and take excruciatingly long.

I have reluctantly come to side with those who have argued that the
mathematical public was over-sold on Grobner bases.

>I want EXACT solutions (which will obviously involve roots of polynomials
>of very high degree), NOT numeric solutions.  Does anybody have
>suggestions about special-purpose packages?  I'm willing to use Macsyma,
>Mathematica or Maple, whatever works.

One suggestion, if you have actual values for the a's, b's, and c's: if
you can give a reasonable upper bound on the degree of the x[i] over Q
(worst-case bounds can be provided by resultants), then, if numeric
solutions _can_ be obtained, one may fairly easily determine the minimal
polynomial of each x[i] over Q.  Of course, if the minimal polynomial is
of great height, you will need to compute those numerical values to great
accuracy.  Thus this method is to be recommended only if you expect one
or more of the x[i] to be roots of "small" polynomials.

For computing the minimal polynomial of an algebraic real, see
"LLL algorithm", index/11YXX.html

Of course you can compute the real points with Newton's method, say, if
you have any idea where the roots are, approximately.
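[A sketch of the "numeric value -> minimal polynomial" idea above, using
mpmath's PSLQ-based findpoly; the example value and degree bound are my
own illustration, not part of the original post:]

```python
# Recover an integer minimal polynomial from a high-precision numeric
# root, as suggested in the post (illustration, not the author's code).
from mpmath import mp, sqrt, findpoly, polyval

mp.dps = 50  # great accuracy is needed if the polynomial has great height

# Pretend a numeric solver handed us this value for some x[i]:
x = sqrt(2) + sqrt(3)

# Ask for an integer polynomial of degree <= 4 vanishing at x.  In
# practice the degree bound would come from, e.g., a resultant.
coeffs = findpoly(x, 4)
print(coeffs)              # integer coefficients, highest degree first
print(polyval(coeffs, x))  # residual at x; should be essentially zero
```

If no polynomial of the requested degree and coefficient size exists,
findpoly returns None, which is the signal to raise the degree bound or
the working precision.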
dave
==============================================================================
Date: Tue, 24 Feb 1998 07:59:37 -0800 (PST)
From: "Richard J. Fateman" 
To: rusin@vesuvius.math.niu.edu
Subject: Re: Solving = C
Newsgroups: sci.math.symbolic

In article <6ctkr3$sed$1@gannett.math.niu.edu> you write:
>In article ,
>Ronald Bruck wrote:
>
>I have reluctantly come to side with those who have argued that the
>mathematical public was over-sold on Grobner bases.

I know I've been skeptical, but is there a large body of critics?

My current view is that there are probably a few pools of "easier than
expected" problems that occur in real situations, in which case Grobner
basis calculations can surprise you.  But mostly they should be used only
as a last resort.

--
Richard J. Fateman
fateman@cs.berkeley.edu
http://http.cs.berkeley.edu/~fateman/
==============================================================================
Date: Tue, 24 Feb 1998 11:53:55 -0600 (CST)
From: Dave Rusin 
To: fateman@CS.Berkeley.EDU
Subject: Re: Solving = C

>I know I've been skeptical, but is there a large body of critics?

Large?  Well, not that I'm aware of.  But I had a couple of email
exchanges with people on related questions; it seems that when the going
gets tough, the tough turn to resultants (!).

One person pointed me to a set of papers in robotics/control theory, in
which the locus of an object's motion was described as a variety.
Elimination via Grobner bases led to the usual intermediate term swell
and the inability to compute solutions in real time.  I take it the way
out in their case was to keep the number of polynomials minimal (that is,
to deal only with varieties which are complete intersections) by treating
only the real locus; I guess after the fact you can reconstruct the
actual integral polynomials.  I can look for their email in my files if
you need to pursue this.
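["Turning to resultants" means eliminating one variable at a time: the
resultant of two polynomials with respect to a variable vanishes exactly
when they share a root in that variable.  A small sketch with sympy (my
example, not from the correspondence):]

```python
# Eliminating y from two quadratics via a resultant (illustration only).
from sympy import symbols, resultant, expand

x, y = symbols('x y')

f = x**2 + y**2 - 5   # a circle
g = x*y - 2           # a hyperbola

# Res_y(f, g) is a polynomial in x alone; its roots include the
# x-coordinates of every common solution of f = g = 0.
r = resultant(f, g, y)
print(expand(r))   # x**4 - 5*x**2 + 4, i.e. (x**2 - 1)*(x**2 - 4)
```

Repeating this pairwise eliminates variables one by one without building
a full Grobner basis, which is the trade the correspondents allude to.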
I also saved some posts regarding efficient elimination: index/13PXX.html

dave
==============================================================================
Date: Tue, 24 Feb 1998 10:22:39 -0800 (PST)
From: "Richard J. Fateman" 
To: rusin@math.niu.edu
Subject: Re: Solving = C

sounds like work by Canny and his students.
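[For contrast with the resultant route, the generic Grobner-basis
elimination the thread criticizes looks like this in sympy; the system
is my own small illustration, not one from the thread:]

```python
# Grobner-basis elimination (illustration): with a lexicographic order
# x > y, the last basis element is a univariate polynomial in y.
from sympy import groebner, symbols

x, y = symbols('x y')
f = x**2 + y**2 - 5
g = x*y - 2

G = groebner([f, g], x, y, order='lex')
print(G.exprs)
# The final polynomial involves y only; its roots give the y-coordinates,
# and back-substitution into the earlier elements recovers x.
```

On two quadratics this is instant; the intermediate term swell the
thread describes shows up as the number of variables and the degrees
grow, which is why the correspondents reach for resultants instead.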