From: spellucci@mathematik.th-darmstadt.de (Peter Spellucci)
Newsgroups: sci.math.num-analysis
Subject: Re: Usawa method
Date: 27 Mar 1998 12:56:33 GMT

In article <351A2214.92928AE4@geocities.com>, Joel Guerrero writes:
|> Does anybody know something about the Uzawa method?
|> From what I have read, it is an alternative to the penalty method
|> in FEM.
|>
|> Joel Guerrero

the problem to be solved:

   f(x) = min,  x in X,  X Hilbert,  f : X -> R,
   subject to h(x) = 0,  h : X -> W,  W Hilbert,

such that ||.|| is given by an inner product.
assumptions: f strictly convex, h affine linear and surjective.

let L(x,mu) = f(x) - <mu, h(x)>   (linear in mu, strictly convex in x).
for given mu, define

   x(mu) = argmin_x L(x,mu)

and let Phi(mu) = L(x(mu),mu). then the solution of the original
problem is x(mu*), where mu* solves

   mu* = argmax_mu Phi(mu).

Method: given mu_k, find x(mu_k) (at least approximately), then set

   mu_{k+1} = mu_k + t_k * grad Phi(mu_k),   t_k sufficiently small.

this is Uzawa's method (a toy sketch in python is appended at the end
of this file). it is a predecessor of the augmented Lagrangian method,
which surely is better: replace L(x,mu) by L(x,mu) + c*||h(x)||**2 for
some fixed c > 0. yes, _some_: since the problem is convex, the method
converges for _any_ c > 0, but it is fast only for c sufficiently
large, yet not too large.

hope this helps
peter

==============================================================================

From: spellucci@mathematik.tu-darmstadt.de (Peter Spellucci)
Newsgroups: sci.math.num-analysis
Subject: Re: Usawa's algorithm for saddle points
Date: 14 Sep 1998 19:03:28 GMT

In article <35FD183B.B1890F34@paris.sgi.com>, Jean-Pierre Panziera writes:
|> Anyone heard of such a method ?
|> Where can I find software for this method ?

it solves convex minimization problems of the kind

   f(x) = min,  g(x) <= 0,

f convex scalar, g = (g_1,...,g_m), each g_i convex scalar, by solving
the saddle point problem for the Lagrangian function. roughly speaking,
it takes gradient minimization steps (in x) and gradient maximization
steps (in the multipliers) in turn; see the second sketch at the end of
this file. this is slow and inefficient, hence no software for it is
available.

there is a special case where f is quadratic and the constraints g_i
are linear equations. Bank & Welfert & Yserentant have published a
paper on this case, and so have Bramble & Pasciak. there should be code
available at the free fem resources:
http://www.engr.usask.ca/~macphed/finite/fe_resources/fe_resources.html

now to the general case: it is the predecessor of what is now known as
"augmented Lagrangian" methods, which are quite successful for _convex_
optimization problems. LANCELOT, e.g., is one outstanding solver in
this direction, though it may be more than you need. if you have a good
unconstrained optimizer available, you should be able to construct a
reasonable code yourself, using it for the minimization phase on the
augmented Lagrangian and the usual multiplier update rules for the
outer maximization (the third sketch at the end of this file shows this
scheme). if your problem is convex, the choice of the penalty parameter
is not so tricky. have a look at Gill, Murray & Wright, "Practical
Optimization", Academic Press, 1981, for a first and readable
introduction.

hope this helps
peter
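
==============================================================================

A minimal Python sketch of the Uzawa iteration described in the first
post, specialized to the quadratic case f(x) = (1/2) x'Qx - c'x with
h(x) = Ax - b, where the inner minimization has a closed form. The
function name, the fixed step size t, and the tolerance are
illustrative assumptions, not from the posts.

import numpy as np

def uzawa_quadratic(Q, c, A, b, t=0.1, tol=1e-8, max_iter=10000):
    # min (1/2) x'Qx - c'x  s.t.  Ax = b,  Q symmetric positive definite.
    # Lagrangian as in the post: L(x,mu) = f(x) - <mu, Ax - b>, so the
    # inner step is x(mu) = Q^{-1}(c + A'mu) and grad Phi(mu) = -(A x(mu) - b).
    mu = np.zeros(A.shape[0])
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        x = np.linalg.solve(Q, c + A.T @ mu)  # x(mu) = argmin_x L(x,mu)
        r = A @ x - b                         # constraint residual h(x(mu))
        if np.linalg.norm(r) < tol:
            break
        mu = mu - t * r                       # mu_{k+1} = mu_k + t*grad Phi(mu_k)
    return x, mu

# example: min x1^2 + x2^2 - x1 - x2  s.t.  x1 + x2 = 1  gives x = (0.5, 0.5)
# x, mu = uzawa_quadratic(2*np.eye(2), np.ones(2),
#                         np.array([[1.0, 1.0]]), np.array([1.0]))

The fixed step must satisfy t < 2/lambda_max(A Q^{-1} A') for the dual
ascent to converge; that is the "t_k sufficiently small" of the post.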
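
Next, a sketch of the alternating scheme the second post describes: one
gradient (minimization) step in x on the Lagrangian L(x,mu) = f(x) +
<mu, g(x)>, then one projected gradient (maximization) step in mu >= 0,
in turn. Reading "gradient minimization and maximization steps in turn"
as this Arrow-Hurwicz-type iteration is an interpretation; the function
name, step sizes, and iteration count are illustrative assumptions. It
is slow, which is the post's point.

import numpy as np

def uzawa_saddle(grad_f, g, jac_g, x0, m, s=0.05, t=0.05, n_iter=50000):
    # min f(x)  s.t.  g(x) <= 0,  g : R^n -> R^m,  f and all g_i convex.
    x = np.asarray(x0, dtype=float)
    mu = np.zeros(m)                               # multipliers, kept >= 0
    for _ in range(n_iter):
        x = x - s * (grad_f(x) + jac_g(x).T @ mu)  # minimization step in x
        mu = np.maximum(0.0, mu + t * g(x))        # projected ascent in mu
    return x, mu

# example: min (x1-2)^2 + (x2-2)^2  s.t.  x1 + x2 <= 2  gives x = (1, 1)
# x, mu = uzawa_saddle(lambda x: 2*(x - 2),
#                      lambda x: np.array([x[0] + x[1] - 2]),
#                      lambda x: np.array([[1.0, 1.0]]), np.zeros(2), m=1)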
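
Finally, a sketch of the construction the second post recommends: a
good unconstrained optimizer (here scipy's BFGS, an arbitrary choice)
for the inner minimization of the augmented Lagrangian, and the usual
multiplier update for the outer maximization. Signs follow
L(x,mu) = f(x) - <mu, h(x)> from the first post; the common
(c/2)||h(x)||^2 scaling is used where the post writes c*||h(x)||^2 (the
factor is a convention), and the default c = 10 is an illustrative
assumption.

import numpy as np
from scipy.optimize import minimize

def aug_lagrangian(f, h, x0, m, c=10.0, tol=1e-8, max_outer=50):
    # min f(x)  s.t.  h(x) = 0,  h : R^n -> R^m.
    mu = np.zeros(m)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_outer):
        # inner phase: minimize La(x) = f(x) - <mu,h(x)> + (c/2)||h(x)||^2
        La = lambda z: f(z) - mu @ h(z) + 0.5 * c * h(z) @ h(z)
        x = minimize(La, x, method="BFGS").x
        hx = h(x)
        if np.linalg.norm(hx) < tol:
            break
        mu = mu - c * hx  # the usual multiplier update (outer maximization)
    return x, mu

# example: min (x1-1)^2 + (x2-2)^2  s.t.  x1 + x2 - 2 = 0  gives (0.5, 1.5)
# x, mu = aug_lagrangian(lambda x: (x[0]-1)**2 + (x[1]-2)**2,
#                        lambda x: np.array([x[0] + x[1] - 2]),
#                        np.zeros(2), m=1)

Since the problems here are convex, any fixed c > 0 works in principle,
as the first post notes; larger c speeds up the outer loop but makes
the inner minimizations harder.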