From: Alexander Poquet
Subject: Optimization Problem
Date: 20 Dec 1999 23:19:51 GMT
Newsgroups: sci.math
Keywords: using Calculus of Variations to show straight lines are geodesic

Ok, a while back I posted a lengthy post on the Calculus of
Variations, to which I received a very helpful answer.  Thanks.

I have a similar problem, this one less lengthy.  I am trying to use
optimization principles from the calculus to show that a straight line
is the shortest path between two points in the Euclidean plane.

If I define f(x) = sqrt( 1 + x^2 ), y(x) as the shortest path between
x_1 and x_2, and n(x) as an arbitrary function, restricted in
definition only in that n(x_1) = n(x_2) = 0, then the function

           / x_2
    I(a) = |      f(y' + a*n') dx
           / x_1

is at a minimum when a = 0.  I am trying to show that y(x) is a
straight line.  Differentiating I(a), I obtain

            / x_2
    I'(a) = |      f'(y' + a*n')*n' dx
            / x_1

Because I know that I(a) is at a minimum when a = 0, I know that
I'(0) = 0.  So I have (setting a = 0)

        / x_2
    0 = |      f'(y')*n' dx
        / x_1

Integrating by parts,

                     | x_2     / x_2
    0 = f'(y')*n(x)  |      -  |      (f''(y')*y'')*n(x) dx
                     | x_1     / x_1

Of course the non-integral term is 0 because n(x_1) and n(x_2) are
both equal to zero (by definition).  So I have that the integral is
equal to zero; and since n(x) is arbitrary, its coefficient must be
zero to make the integral zero.  Hence:

    0 = f''(y'(x))*y''(x)

which implies that either y''(x) = 0 or f''(y'(x)) is equal to zero.
If y''(x) is equal to zero, then y'(x) must be constant, so y(x) is
linear, suggesting that a straight line minimizes the integral -- this
is the desired result.

But what about the other term?  What does it mean?  I expanded it out
to try and figure out if it simplified to something obvious:

    f(x)  = sqrt( 1 + x^2 )
    f'(x) = x/sqrt( 1 + x^2 )

             sqrt( 1 + x^2 ) - x/(2*sqrt( 1 + x^2 ))
    f''(x) = ---------------------------------------
                           1 + x^2

For simplicity, let's just let x be dy/dx.
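[Editor's note: the first-variation setup above can be checked numerically.
The sketch below is an illustration added in editing, not part of the
original post; the particular choices y(x) = 2x and n(x) = sin(pi*x) are
arbitrary examples satisfying the stated endpoint conditions.]

```python
import math

# f(p) = sqrt(1 + p^2), the arclength integrand from the post.
def f(p):
    return math.sqrt(1.0 + p * p)

def I(a, steps=2000):
    """I(a) = integral from 0 to 1 of f(y' + a*n') dx (midpoint rule),
    with y(x) = 2x (so y' = 2) and n(x) = sin(pi*x) (so n(0) = n(1) = 0)."""
    h = 1.0 / steps
    total = 0.0
    for k in range(steps):
        x = (k + 0.5) * h
        n_prime = math.pi * math.cos(math.pi * x)
        total += f(2.0 + a * n_prime) * h
    return total

# Central-difference approximation of I'(0).  Analytically it equals
# f'(2) * (n(1) - n(0)) = 0, so the straight line is a critical point.
dI = (I(1e-3) - I(-1e-3)) / 2e-3
print(abs(dI) < 1e-6)   # True
print(I(0.1) > I(0))    # True: perturbing the line increases the length
```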
If this is equal to zero, then I can drop the denominator; it can
never be zero for real x, and since y is a real-valued function its
derivative is also real-valued.  I then end up with the numerator
equal to zero; that means that

    sqrt( 1 + x^2 ) = x/(2*sqrt( 1 + x^2 ))
            1 + x^2 = x/2
      x^2 - x/2 + 1 = 0

That quadratic equation gives two complex roots.  Since y(x) is real
valued, y'(x) is real valued, so we have a situation that cannot
occur; so I assume that means y''(x) = 0 is the sole solution to the
differential equation (which is consistent, of course!)

Am I right here, or is there some step I am missing?  In most of the
books I've seen this sort of problem done in, it seems much, much more
complicated.  Also, is there any meaning associated with those complex
roots?  Do they suggest an alternative shortest route in some other
space?

Also, my original minimizing equation f''(y')*y'' = 0 suggests that
y'' = 0 is ALWAYS a solution, which implies that a straight line path
will minimize any integral whose integrand is a function only of the
derivative of said path.  Can that be so?  It seems too far-reaching.

Thanks in advance.
Alexander Poquet

==============================================================================

From: foltinek@math.utexas.edu (Kevin Foltinek)
Subject: Re: Optimization Problem
Date: 21 Dec 1999 12:15:29 -0600
Newsgroups: sci.math

In article <385eb997$0$209@nntp1.ba.best.com> Alexander Poquet writes:

> I have a similar problem, this one less lengthy.  I am
> trying to use optimization principles from the calculus
> to show that a straight line is the shortest path between
> two points in the Euclidean Plane.
>
> If I define f(x) = sqrt( 1 + x^2 ), y(x) as the shortest
> path between x_1 and x_2
> [snip]
> Hence:
>
>     0 = f''(y'(x))*y''(x)
>
> which implies that either y''(x) = 0 or f''(y'(x)) is
> equal to zero.
> If y''(x) is equal to zero, then y'(x)
> must be constant, so y(x) is linear, suggesting that a
> straight line minimizes the integral -- this is the
> desired result.

Right so far.

> But what about the other term?  What does it mean?
> I expanded it out to try and figure out if it
> simplified to something obvious:
>
>     f(x)  = sqrt( 1 + x^2 )
>     f'(x) = x/sqrt( 1 + x^2 )
>
>              sqrt( 1 + x^2 ) - x/(2*sqrt( 1 + x^2 ))
>     f''(x) = ---------------------------------------
>                            1 + x^2

Oops.

    f''(x) = (1+x^2)^(-3/2)

which is never zero.

But you do raise an interesting question: what would happen if the
Euler-Lagrange equations gave complex solutions?

Well, usually, this will not happen: the Euler-Lagrange equations for
a Lagrangian L(x,y,y') are

    0 = D2L(x,y,y') - d/dx ( D3L(x,y,y') )

or

    0 = D2L(x,y,y') - D1D3L(x,y,y') - D2D3L(x,y,y')*y'
                    - D3D3L(x,y,y')*y''

so if D3D3L is non-zero (or invertible if y is vector-valued), you
obtain an ordinary differential equation with real coefficients, which
will have a real solution (assuming everything is smooth).

The only way that anything non-real can happen is when D3D3L is not
invertible.  There are two cases (when y is a scalar): when D3D3L is
identically zero, and when it is non-zero except at some places
(generically these places will form a surface in (x,y,y') space).

In the first case, the fact that D3D3L is identically zero means that
D3L(x,y,y') = f(x,y), so L(x,y,y') = f(x,y)*y' + g(x,y); and the
Euler-Lagrange equations become

    0 = D2f*y' + D2g - D1f - D2f*y' = D2g - D1f

in which y' no longer appears at all!  So the curve must lie in the
zero-set of D2g - D1f.

In the second case, things are more subtle, and there's a branch of
ODE theory which deals with these singular equations (though I don't
know much about it).

But, should you ever find an Euler-Lagrange equation of the form
f(x,y,y')*y'' = 0, you will have two cases of critical curves:
y'' = 0, and f(x,y,y') = 0 (which defines a first-order differential
equation).
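[Editor's note: the corrected second derivative is easy to confirm
numerically.  The check below is an editorial addition, not part of the
exchange; it compares a central second difference of f against the
closed form (1+p^2)^(-3/2) at a few sample points.]

```python
import math

# f(p) = sqrt(1 + p^2) from the post; its second derivative is
# f''(p) = (1 + p^2)^(-3/2), which is strictly positive for real p,
# so the f''(y') factor in the Euler-Lagrange equation never vanishes.

def f(p):
    return math.sqrt(1.0 + p * p)

def f2_numeric(p, h=1e-4):
    # central second difference
    return (f(p + h) - 2.0 * f(p) + f(p - h)) / (h * h)

def f2_exact(p):
    return (1.0 + p * p) ** -1.5

for p in (-3.0, -0.5, 0.0, 1.0, 10.0):
    assert abs(f2_numeric(p) - f2_exact(p)) < 1e-5
    assert f2_exact(p) > 0.0
print("f''(p) = (1+p^2)^(-3/2) > 0 for all real p")
```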
None of this, though, addresses your question about complex solutions.
There are two approaches to take here.

1) You can pose your variational problem on the set of real curves,
   and by doing so, any complex solution must be discarded (because
   it's not real).

2) You can pose your variational problem on the set of complex-valued
   curves.  In this case, you're (probably) no longer looking for an
   extremal curve (because there is no "less than" in the complex
   numbers), but just looking for critical curves, those at which "the
   derivative is zero".  Then everything works just fine as before
   (and, in fact, this is a useful thing to do in the complex case: it
   gives information about the topology of the spaces involved, for
   example).

> In most of the books I've seen this sort of problem
> done in, it seems much, much more complicated.

Usually the length-minimizing problem minimizes

    \int_{t1}^{t2} \sqrt{ (dx/dt)^2 + (dy/dt)^2 } dt

where you are looking at curves (x(t),y(t)).  The reason we do this is
because (1) the approach you took missed some of the solutions (the
vertical lines), and (2) it generalizes (more easily) to other cases,
for example, finding the shortest path on a sphere.

> Also, my original minimizing equation f''(y')*y'' = 0 suggests
> that y'' = 0 is ALWAYS a solution, which implies that
> a straight line path will minimize any integral whose
> integrand is a function only of the derivative of
> said path.  Can that be so?  It seems too far-reaching.

Be careful about using the word "minimize".  Just as f'(x) = 0 does
not imply that x is a minimum of f (consider f(x) = x^3 or
f(x) = -x^2), a solution of the Euler-Lagrange equation does not
necessarily give even a local minimum of the integral.  There is
something you can look at which is analogous to the second derivative
test (actually it is the second derivative test, in a sense) which
might tell you whether you are at a local minimizer or local maximizer
or just a critical point (curve).
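[Editor's note: the parametric formulation above can be illustrated with
a short numerical sketch, added in editing.  It computes the arclength
integral \int \sqrt{(dx/dt)^2+(dy/dt)^2} dt for a few sample curves,
including a vertical segment, which the graph form y(x) cannot represent.]

```python
import math

def length(x, y, t1=0.0, t2=1.0, steps=4000):
    """Arclength of the curve t -> (x(t), y(t)), via central differences
    for the derivatives and the midpoint rule for the integral."""
    h = (t2 - t1) / steps
    d = 1e-6
    total = 0.0
    for k in range(steps):
        t = t1 + (k + 0.5) * h
        dx = (x(t + d) - x(t - d)) / (2 * d)
        dy = (y(t + d) - y(t - d)) / (2 * d)
        total += math.sqrt(dx * dx + dy * dy) * h
    return total

# straight line from (0,0) to (1,1) versus a bowed detour between
# the same endpoints
straight = length(lambda t: t, lambda t: t)
bowed = length(lambda t: t, lambda t: t + 0.2 * math.sin(math.pi * t))
print(straight)           # ~ sqrt(2) = 1.41421...
print(bowed > straight)   # True

# a vertical segment from (0,0) to (0,1): fine parametrically,
# but not the graph of any function y(x)
vertical = length(lambda t: 0.0, lambda t: t)
print(vertical)           # ~ 1.0
```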
But yes, if L(x,y,y') = f(y'), then straight lines (y(x) = ax+b) will
always be critical points of the integral \int L(x,y(x),y'(x)) dx.

Kevin.
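[Editor's note: the closing claim can be spot-checked numerically for an
integrand that is not the length.  The sketch below, an editorial
addition, uses the arbitrary choice f(p) = p^4 with the straight line
y(x) = 3x and perturbation n(x) = sin(pi*x), and verifies that the
first variation at the line is zero.]

```python
import math

def I(a, steps=2000):
    """Integral from 0 to 1 of f(y' + a*n') dx with f(p) = p^4,
    y(x) = 3x (so y' = 3), n(x) = sin(pi*x) (so n(0) = n(1) = 0)."""
    h = 1.0 / steps
    total = 0.0
    for k in range(steps):
        x = (k + 0.5) * h
        p = 3.0 + a * math.pi * math.cos(math.pi * x)
        total += p ** 4 * h
    return total

# Central-difference approximation of I'(0).  Analytically,
# I'(0) = integral of 4*(3)^3 * n'(x) dx = 108 * (n(1) - n(0)) = 0,
# so the straight line is a critical point even for this integrand.
dI = (I(1e-3) - I(-1e-3)) / 2e-3
print(abs(dI) < 1e-5)   # True
```

As Foltinek notes, "critical point" is the right phrase: without a
second-variation argument this computation alone does not show the line
is a minimizer.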