From: "Dr. Michael Albert"
Subject: Re: Implicit Function Theorem
Date: Wed, 10 Jan 2001 18:39:36 -0500
Newsgroups: sci.math
To: "Paulo J. Matos aka PDestroy"

On Wed, 10 Jan 2001, Paulo J. Matos aka PDestroy wrote:

> Hi can anyone explain me this theorem in R^n and give me an example pls?

Let x_1, ..., x_m and y_1, ..., y_p be variables, with m + p = n, so that
(x_1,...,x_m,y_1,...,y_p) can be thought of as an element of R^n. Now
consider p equations in these variables:

   f_i(x_1,...,x_m,y_1,...,y_p) = 0,    i = 1,...,p.

The f_i's can also be thought of as defining a function from R^n to R^p.

Now, as a rule of thumb, if the p equations are "independent", then one can
"solve for" y_1,...,y_p, i.e., find functions g_1,...,g_p, each a function
of (x_1,...,x_m), such that

   f_i(x_1,...,x_m, g_1(x_1,...,x_m), g_2(x_1,...,x_m), ...,
       g_p(x_1,...,x_m)) = 0

identically. The implicit function theorem gives _sufficient_ conditions
under which one can solve the equations locally, i.e., under which
g_1,...,g_p exist locally.

For the formal statement, let (x_1,...,x_m,y_1,...,y_p) =
(u_1,...,u_m,v_1,...,v_p) be a point which simultaneously solves the
f_i's, i.e.,

   f_i(u_1,...,u_m,v_1,...,v_p) = 0.

Further, assume the f_i are k-times continuously differentiable in some
neighborhood of (u_1,...,u_m,v_1,...,v_p), with k >= 1. Now consider the
p x p matrix of partial derivatives

   \partial f_i / \partial y_j

evaluated at (x_1,...,x_m,y_1,...,y_p) = (u_1,...,u_m,v_1,...,v_p). If
this matrix is non-singular (invertible, determinant not zero), then there
is a neighborhood of (u_1,...,u_m) in which the g_i's exist, are k-times
continuously differentiable, and are unique.

The key idea of the proof is that if one considers
z_i = f_i(x_1,...,x_m,y_1,...,y_p), then in a sufficiently small
neighborhood the z_i's are approximately linear functions of the y_j's,
and if this linear approximation is invertible, so is the map itself.
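To make this concrete, here is a small numerical sketch (my own illustration, not part of the theorem's statement): take m = 1, p = 1, f(x,y) = x^2 + y^2 - 1, and the point (u,v) = (0,1), where f = 0 and \partial f / \partial y = 2v = 2 is non-zero. Newton's method in y then computes the implicit function g locally, and near x = 0 it agrees with the explicit solution g(x) = sqrt(1 - x^2):

```python
import math

# f(x, y) = x^2 + y^2 - 1; at (u, v) = (0, 1) we have f = 0 and
# df/dy = 2v = 2 != 0, so the theorem guarantees a local solution
# y = g(x) (explicitly, g(x) = sqrt(1 - x^2)).
def f(x, y):
    return x**2 + y**2 - 1.0

def df_dy(x, y):
    return 2.0 * y

def g(x, y0=1.0, tol=1e-12):
    """Solve f(x, y) = 0 for y by Newton's method, starting near v = 1."""
    y = y0
    for _ in range(50):
        step = f(x, y) / df_dy(x, y)
        y -= step
        if abs(step) < tol:
            break
    return y

# Near x = 0, the computed g matches the explicit branch sqrt(1 - x^2):
for x in (0.0, 0.1, 0.5):
    assert abs(g(x) - math.sqrt(1.0 - x**2)) < 1e-10
    assert abs(f(x, g(x))) < 1e-10
```

This is the same iteration that appears in the proof sketch below: near (u,v) the map y |-> f(x,y) is approximately linear with invertible derivative, which is exactly what makes Newton's method converge.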
Actually, an alternate version of the implicit function theorem gives
h_i's which are functions of the x's and the z's such that

   z_i = f_i(x_1,...,x_m, h_1(x_1,...,x_m,z_1,...,z_p), ...,
             h_p(x_1,...,x_m,z_1,...,z_p)).

As an example, note that the implicit function theorem can fail when the
condition on the matrix of partial derivatives does not hold. As a simple
example, let m = 1, p = 1, and f_1(x_1,y_1) = x_1^2 + y_1^2. If u_1 = 0,
v_1 = 0, then f_1(u_1,v_1) = 0, but clearly there is no g_1 such that
f_1(x_1, g_1(x_1)) = 0 for x_1 in any neighborhood of x_1 = 0, since
f_1 > 0 whenever x_1 != 0. As a second example of what can happen if the
partial derivative test is dropped from the hypothesis, consider
f_1(x_1,y_1) = x_1^2 - y_1^2: both y_1 = x_1 and y_1 = -x_1 solve the
equation through the origin, so uniqueness fails.

The implicit function theorem is generally proved as a consequence of the
inverse function theorem. When you read the proof of the inverse function
theorem in most books (I happen to have Lang's Differential and Riemannian
Manifolds sitting near me), the heart of the proof is actually showing
that Newton's method for solving an equation works in some small
neighborhood. Also, the proof using this method is no easier in R^n than
in the more general case of Banach spaces.

The proof of the inverse function theorem for n = 1 is not too bad -- you
might want to try it yourself as an exercise. One can even use the same
technique to continue to R^n: basically, one starts with
y_i = f_i(x_1,...,x_n), then uses the one-dimensional case iteratively to
"solve" for each x_i until one finally has x_i = g_i(y_1,...,y_n) as the
inverse. I find this way of proving the result amusing, but in truth it is
more difficult and, of course, doesn't generalize to the
infinite-dimensional case.

Best wishes,
Mike
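P.S. A quick numerical check of the two counterexamples above (a Python sketch of my own, with the sample points chosen just for illustration):

```python
# Both counterexamples are checked at (u, v) = (0, 0), where
# df/dy vanishes and the theorem's hypothesis fails.

# Example 1: f(x, y) = x^2 + y^2.  f(0, 0) = 0, but for any x != 0
# there is no real y with f(x, y) = 0, so no g exists near x = 0:
f1 = lambda x, y: x**2 + y**2
assert f1(0.0, 0.0) == 0.0
assert min(f1(0.1, y / 100.0) for y in range(-100, 101)) > 0.0

# Example 2: f(x, y) = x^2 - y^2.  Here y = x and y = -x both solve
# f(x, y) = 0 through (0, 0): a solution exists but is not unique:
f2 = lambda x, y: x**2 - y**2
for x in (0.0, 0.1, 0.5):
    assert f2(x, x) == 0.0 and f2(x, -x) == 0.0

# In both cases df/dy = +/-2y = 0 at (0, 0), which is exactly the
# situation the non-singularity hypothesis rules out.
```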