From: spellucci@mathematik.tu-darmstadt.de (Peter Spellucci)
Subject: Re: How to compute gradient of functional ???
Date: 24 Sep 1999 13:37:02 GMT
Newsgroups: sci.math.num-analysis
Keywords: numerical computation of surface minimizing a functional [65J15]

In article <37EA6154.7859DC18@pi.tu-berlin.de>, Andreas Wick writes:
|> Hello,
|>
|> I have to solve the following problem:
|>
|> Given the functional
|>
|>     F( x , y ) = \int f( dx/ds , dx/dt , dy/ds , dy/dt ) ds dt
|>
|> f ... is a fourth-order polynomial with only one extremum (a minimum)
|>
|> x(s,t), y(s,t) ... are single-valued mappings (s,t) -> (x,y) whose
|> Jacobian is nowhere zero in A = \int ds dt
|>
|> dx/ds ... partial derivative of x with respect to s
|>
|> and some initial guess
|>
|>     x^0 (s,t) , y^0 (s,t) in A
|>
|> with
|>
|>     x(s,t) = x^0 (s,t) \
|>                         |  on the boundary of A,
|>     y(s,t) = y^0 (s,t) /   i.e. the mapping is known only on the
|>                            boundary.
|>
|> Find the mapping x(s,t), y(s,t) that minimizes F.
|>
|> First of all: by means of the calculus of variations I can get the
|> Euler equations of this problem. But they
|> are too difficult to solve numerically. So I have to get the solution by
       ^^^^^^^^^^^^^ sounds not very promising
|> minimizing the functional with
|> a conjugate-gradient-type algorithm.

[snip]

There is some controversy over whether one should

  first discretize (the integral, say by finite elements), then minimize
  (by an appropriate minimizer in some R^N, N >> 1; in your case, say, a
  Newton, quasi-Newton, or preconditioned conjugate gradient method),

or

  first minimize (formally), then discretize.

You obviously want to take the second path. Look up the papers of Sachs
and Kelley, which describe how to do that:

788.65067
Kelley, C.T.; Sachs, E.W.
Pointwise Broyden methods. (English)
[J] SIAM J. Optim. 3, No. 2, 423-441 (1993).

Pointwise quasi-Newton methods update the coefficients of differential
and integral operators in function spaces.
This paper gives a general theory of such methods and unifies it with
the theory of Broyden's method in Hilbert space. In particular, a new
superlinearly convergent method is introduced for elliptic boundary
value problems. [ S. Zlobec (Montreal) ]
Keywords: superlinear convergence; pointwise quasi-Newton methods;
Broyden's method; Hilbert space

Kelley, C.T.; Sachs, E.W.; Watson, B.
Pointwise quasi-Newton method for unconstrained optimal control
problems. II. (English)
[J] J. Optimization Theory Appl. 71, No. 3, 535-547 (1991).

The necessary optimality conditions for an unconstrained optimal
control problem are used to derive a quasi-Newton method, where the
update involves only second-order derivative terms. A pointwise update
which was presented in part I of this paper [the first and second
author, Numer. Math. 55, No. 2, 159-176 (1989; Zbl. 661.65068)] is
changed to allow for more general second-order sufficiency conditions
in the control problem. In particular, pointwise versions of the
Broyden, PSB, and SR1 updates are considered. A convergence rate
theorem is given for the Broyden and PSB versions.

625.65104
Kelley, C.T.; Sachs, E.W.
A quasi-Newton method for elliptic boundary value problems. (English)
[J] SIAM J. Numer. Anal. 24, 516-531 (1987).

The authors develop a quasi-Newton method based on the differential
equation. As a consequence, sparsity properties of the discrete
approximation are preserved in the iterates. An additional advantage of
the method is that, because it is specifically designed for differential
equations, the updating algorithm incorporates information about the
problems at hand and can therefore outperform more general quasi-Newton
methods. A proof of superlinear convergence is included, and several
computational examples are given. [ G. Hedstrom ]
Keywords: quasi-Newton method; updating algorithm; superlinear
convergence; computational examples

hope this helps
peter
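The Kelley-Sachs papers above build on Broyden's method, lifted to function
spaces. For orientation, here is a minimal finite-dimensional sketch of the
classical "good" Broyden update; the system G and the starting point are made
up purely for illustration and have nothing to do with the papers' elliptic
examples:

```python
import numpy as np

def broyden(G, z, B, tol=1e-10, maxit=100):
    """Broyden's 'good' method: solve G(z) = 0, correcting the
    approximate Jacobian B with a rank-one update each step."""
    B = B.astype(float).copy()
    g = G(z)
    for _ in range(maxit):
        if np.linalg.norm(g) < tol:
            break
        dz = np.linalg.solve(B, -g)          # quasi-Newton step
        z = z + dz
        g_new = G(z)
        # rank-one update: B <- B + (y - B s) s^T / (s^T s),
        # with s = dz and y = G(z_new) - G(z_old)
        B += np.outer(g_new - g - B @ dz, dz) / (dz @ dz)
        g = g_new
    return z

# illustrative system: intersection of the unit circle with the line x = y
G = lambda z: np.array([z[0]**2 + z[1]**2 - 1.0, z[0] - z[1]])
z0 = np.array([0.8, 0.6])
B0 = np.array([[2 * z0[0], 2 * z0[1]],        # exact Jacobian at z0,
               [1.0, -1.0]])                  # a common starting choice
root = broyden(G, z0, B0)                     # -> approx (0.7071, 0.7071)
```

The point of the pointwise variants is that in function space the "matrix" B
becomes an operator whose coefficients are updated pointwise, so sparsity and
structure of the discretization survive the updates.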
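For comparison, the first path (discretize, then minimize in R^N) is easy to
prototype. The sketch below does NOT use Andreas's actual f; as an
illustrative fourth-order stand-in it takes f = (J - 1)^2 with
J = x_s*y_t - x_t*y_s the Jacobian of the mapping, fixes the boundary values
at the initial guess, and hands the discretized functional to SciPy's
nonlinear CG (gradient by finite differences):

```python
import numpy as np
from scipy.optimize import minimize

n = 9                                  # coarse grid: N = 2*(n-2)^2 unknowns
h = 1.0 / (n - 1)
s, t = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n),
                   indexing="ij")
x0, y0 = s.copy(), t.copy()            # initial guess: identity mapping

def F(z):
    """Discretized functional: quadrature of f(x_s, x_t, y_s, y_t).
    Interior values come from z; boundary values stay fixed at x0, y0."""
    x, y = x0.copy(), y0.copy()
    m = (n - 2) ** 2
    x[1:-1, 1:-1] = z[:m].reshape(n - 2, n - 2)
    y[1:-1, 1:-1] = z[m:].reshape(n - 2, n - 2)
    xs, xt = np.gradient(x, h, axis=0), np.gradient(x, h, axis=1)
    ys, yt = np.gradient(y, h, axis=0), np.gradient(y, h, axis=1)
    J = xs * yt - xt * ys
    f = (J - 1.0) ** 2                 # illustrative integrand, not the poster's f
    return f.sum() * h * h             # crude quadrature

# perturb the interior so there is something to minimize
rng = np.random.default_rng(0)
z0 = np.concatenate([x0[1:-1, 1:-1].ravel(), y0[1:-1, 1:-1].ravel()])
z0 += 0.05 * rng.standard_normal(z0.size)
res = minimize(F, z0, method="CG")     # gradient via finite differences
```

Finite-difference gradients cost one F-evaluation per unknown, which is why,
beyond such toy sizes, one either codes the analytic gradient of the
discretized F or goes the second path via the papers above.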