From: Dave Rusin
Date: Tue, 5 May 1998 09:36:57 -0500 (CDT)
To: fletcher@armadillo.acm.org
Subject: Re: A little more help with bound-constrained nonlinear minimization
Newsgroups: sci.math.num-analysis

In article <01bd7608$2985ea60$da8237cf@fletch.terminalreality.com> you write:
>I have an articulated skeleton, and wish to solve for interior angles
>(i.e. elbow/shoulder) so that a bone on the end of the articulation
>(i.e. a hand) can be placed in a certain position and/or orientation.
>(This is Inverse Kinematics.) Bones have 1, 2, or 3 degrees of freedom,
>and each of these degrees of freedom is subject to constraints, for
>example to prevent the elbow from bending backwards. I can devise a
>function f which tells me how "bad" a certain configuration of angles
>is, and have some documentation on how I could compute "gradients" for
>this function. (I actually have no idea what a gradient is; I assumed
>it was similar to a derivative.)

The space of possible configurations is a pretty standard object of
study in e.g. robotics. There may be some useful links in
index/70-XX.html, but I'm sure much more detailed information is known
to the people who build robot arms etc.

>So, my problem is:
>
>theta is an array of N angles, with constraints on the angles. f(theta)
>is always positive. I wish to minimize f(theta) subject to the
>constraints.
>
>Several notes which might influence the choice of algorithm:
>
>1. Since I will only use this for a limited branch of the hierarchy, N
>is relatively small (N < 10).
>2. This algorithm needs to work as quickly as possible, since it is for
>a real-time application.

It should be pretty easy to update the choice of variables to match a
change in parameters (that is, if you need to move the hand up 6", you
should be able to estimate the new optimal angles pretty easily); that's
essentially what gradients are all about -- estimating incremental
changes in one thing when other things change.

Observe that you can in this way be led to some non-global optima.
Consider the "plate trick": stand like a waiter with a tray holding a
glass of water at shoulder height; lower it to hip height; rotate it
180 degrees; continue rotating as you raise the tray back to shoulder
height (you may find it necessary to push the tray away and behind
you). In this way you always seek to follow the local optimum from one
configuration to the next, and yet you end up with your hand (roughly)
where you started, but with all the internal angles very different from
your initial optimum.

>3. All I will ever really care about is the sin & cos of the angles.
>It is quite possible that f is a polynomial in terms of sin(theta_i)
>and cos(theta_i), although I can't guarantee that right now since I'm
>not sure what sort of "tensions" on angles I want to build into f.
>Anyway, I thought I'd mention that in case it sparks someone's
>imagination and they know of a really nice trick in this case.

Yes, that seems like it would help: if f is a differentiable function
of x_i = cos(theta_i) and y_i = sin(theta_i), then you can use Lagrange
multipliers -- you want to minimize f(x_1, y_1, ..., x_N, y_N) subject
to the N constraints x_i^2 + y_i^2 = 1. If f is in fact a low-degree
polynomial in these variables, it is plausible that one can get an
accurate solution with techniques from computational commutative
algebra, e.g. Groebner bases (some links in index/13PXX.html).
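For concreteness, here is a minimal numerical sketch of that
constrained formulation in Python, using SciPy's SLSQP solver (which
enforces the equality constraints via Lagrange multipliers internally).
The objective badness() below is a made-up stand-in for your f, just to
have something to run:

    import numpy as np
    from scipy.optimize import minimize

    N = 3  # number of joint angles

    def badness(v):
        # v = [x_1, y_1, ..., x_N, y_N], with x_i = cos(theta_i) and
        # y_i = sin(theta_i).  Hypothetical polynomial objective: how far
        # the chain's endpoint (here just a sum of unit vectors) lies
        # from a target point.
        x, y = v[0::2], v[1::2]
        return (x.sum() - 1.5)**2 + (y.sum() - 1.0)**2

    # One equality constraint x_i^2 + y_i^2 - 1 = 0 per joint.
    cons = [{"type": "eq", "fun": (lambda v, i=i: v[2*i]**2 + v[2*i+1]**2 - 1)}
            for i in range(N)]

    theta0 = np.zeros(N)  # initial guess, e.g. from a canned animation pose
    v0 = np.ravel(np.column_stack([np.cos(theta0), np.sin(theta0)]))
    res = minimize(badness, v0, method="SLSQP", constraints=cons)
    theta = np.arctan2(res.x[1::2], res.x[0::2])  # recover the angles

In a real-time setting you would warm-start each frame from the
previous frame's solution, which keeps you on the nearby local optimum
as discussed above.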
I should caution that the real value of Groebner-basis techniques is
that they allow a global search for all the optima; if you just want a
search for a nearby optimum (as in the case of continuous motion), you
will find Groebner bases disappointingly slow and memory-hungry.

>4. I can supply the algorithm with a valid "guess" configuration, since
>we already have angles stored for canned animation sequences. In fact,
>if the algorithm finds a decent local minimum (feasible according to
>the constraints) which is relatively close to this initial guess, it
>would probably be acceptable, rather than ensuring the absolute minimum
>across the entire domain.

If you need to solve F(x) = y (where x and y are points in R^N and F is
differentiable), then you can readily do so with Newton's method if you
have a good enough initial guess x0 with F(x0) ~ y: simply replace the
guess x0 by a new one,

    x = x0 - (F'(x0))^(-1) * (F(x0) - y),

and iterate as often as necessary.

Sounds like a fun project.

dave
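P.S.  A minimal sketch of that Newton iteration in Python, applied to a
made-up two-link planar arm (the link lengths, target, and starting
guess are invented for the example; in your setting F would map joint
angles to the hand's position/orientation):

    import numpy as np

    def newton_solve(F, J, x0, y, tol=1e-9, max_iter=50):
        # Solve F(x) = y, given the Jacobian J = F' and a good guess x0.
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            r = F(x) - y                      # residual F(x) - y
            if np.linalg.norm(r) < tol:
                break
            x = x - np.linalg.solve(J(x), r)  # x <- x - F'(x)^(-1) (F(x) - y)
        return x

    # Toy F: endpoint of a two-link arm with link lengths a and b.
    a, b = 1.0, 1.0
    def F(t):
        return np.array([a*np.cos(t[0]) + b*np.cos(t[0] + t[1]),
                         a*np.sin(t[0]) + b*np.sin(t[0] + t[1])])
    def J(t):
        s01, c01 = np.sin(t[0] + t[1]), np.cos(t[0] + t[1])
        return np.array([[-a*np.sin(t[0]) - b*s01, -b*s01],
                         [ a*np.cos(t[0]) + b*c01,  b*c01]])

    y = np.array([1.2, 0.8])                  # target hand position
    theta = newton_solve(F, J, np.array([0.3, 0.9]), y)

Each step costs one linear solve in N unknowns, so for N < 10 this is
cheap enough for real-time use, and the previous frame's angles make a
good x0.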