From: artfldodgr@my-deja.com
Subject: Re: differentiable surjections
Date: Sat, 24 Jun 2000 03:06:16 GMT
Newsgroups: sci.math
Summary: [missing]

In article <01bfdc4a$dcde25a0$e7ce8ad1@daves-dell>,
  "David C. Ullrich" wrote:
>
> macavity wrote in article
> <2a6077c8.595ab979@usw-ex0106-047.remarq.com>...
> [...]
> >
> > Yes, coming to think of it, I have never come across an
> > extension of Rolle into any dimension other than 1. But I never
> > thought it (or an appropriate reformulation - based on the total
> > derivative) would not hold - the arguments involved being rather
> > basic.
>
> They're rather basic but also rather specific to R. How
> do you prove Rolle's theorem? WLOG f(a) = f(b) = 0;
> now if f is not identically zero, look at a global extremum
> in (a,b); f' must vanish at that point.
>
> The most reasonable interpretation of "extremum"
> for a function from R^n to itself might be an extremum of
> |f|. But there's no reason all the partials of all the components
> of f have to vanish at such a point.
>
> The slogan I was taught years ago was that MVT
> fails for R^n-valued functions, although "all" of its
> consequences remain true (e.g. bounds on the partial
> derivatives imply Lipschitz estimates).

One explanation for this is the Mean Value Inequality for
f: R^n --> R^m, one form of which is the following: If f is a C^1
mapping of a convex set U (contained in R^n) into R^m, and if x and y
are two points of U, then |f(x)-f(y)| <= M*|x-y|, where M is the sup
of |f'(z)| (operator norm) as z varies over the line segment
connecting x to y.

For a proof see Blue Rudin.

--A.

Sent via Deja.com http://www.deja.com/
Before you buy.

==============================================================================

From: david_ullrich@my-deja.com
Subject: Re: differentiable surjections
Date: Sat, 24 Jun 2000 17:12:18 GMT
Newsgroups: sci.math
Summary: [missing]

In article <8j18j1$2vp$1@nnrp1.deja.com>,
  artfldodgr@my-deja.com wrote:
> In article <01bfdc4a$dcde25a0$e7ce8ad1@daves-dell>,
>   "David C. Ullrich" wrote:
[...]
> >
> > The slogan I was taught years ago was that MVT
> > fails for R^n-valued functions, although "all" of its
> > consequences remain true (e.g. bounds on the partial
> > derivatives imply Lipschitz estimates).
>
> One explanation for this is the Mean Value Inequality for
> f: R^n --> R^m, one form of which is the following: If f is a C^1
> mapping of a convex set U (contained in R^n) into R^m, and if x and y
> are two points of U, then |f(x)-f(y)| <= M*|x-y|, where M is the sup
> of |f'(z)| (operator norm) as z varies over the line segment
> connecting x to y.

A person might regard this as more an _instance_ of the above than an
"explanation" for it. Never mind.

> For a proof see Blue Rudin.

Or choose a unit vector v, let F(t) = <f(x + t*(y-x)), v> and apply
the MVT to F to deduce that |<f(x) - f(y), v>| <= M*|x-y|, then
conclude that since this holds for all unit vectors v we must have

  |f(x) - f(y)| <= M*|x-y| .

Which must be the proof in Blue Rudin - he does that a lot. I mention
it just to point out that the proofs of these consequences of MVT in
contexts where MVT does not hold often simply _use_ the one-variable
MVT.

> --A.

Sent via Deja.com http://www.deja.com/
Before you buy.
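==============================================================================

Written out in full, the unit-vector argument in the last post runs as
follows (a sketch in the thread's own notation; the concluding choice
of v is only implicit in the post):

  \documentclass{article}
  \usepackage{amsmath}
  \begin{document}
  Fix $x, y \in U$ and a unit vector $v \in \mathbf{R}^m$, and set
  $F(t) = \langle f(x + t(y-x)), v\rangle$ for $t \in [0,1]$. By the
  chain rule $F'(t) = \langle f'(x + t(y-x))(y-x), v\rangle$, so the
  one-variable MVT gives a $\theta \in (0,1)$ with
  \begin{align*}
  \langle f(y) - f(x), v\rangle &= F(1) - F(0) = F'(\theta)\\
    &\le |f'(x + \theta(y-x))(y-x)| \le M\,|x-y|,
  \end{align*}
  where the first inequality is Cauchy--Schwarz (using $|v| = 1$) and
  the second is the definition of the operator norm. Since this holds
  for every unit vector $v$, taking $v = (f(y)-f(x))/|f(y)-f(x)|$
  (when $f(x) \ne f(y)$) gives $|f(x) - f(y)| \le M\,|x-y|$.
  \end{document}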
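The failure itself is easy to see concretely. A standard example (not
from the thread) is the circle f(t) = (cos t, sin t) on [0, 2*pi]: the
endpoint values agree, but |f'(t)| = 1 for every t, so there is no
mean-value point c with f(2*pi) - f(0) = f'(c)*(2*pi - 0). A quick
numerical check:

  import numpy as np

  def f(t):
      return np.array([np.cos(t), np.sin(t)])

  # Endpoint values agree, as in the hypothesis of Rolle/MVT.
  print(np.allclose(f(0.0), f(2.0 * np.pi)))         # True

  # But the derivative f'(t) = (-sin t, cos t) never vanishes:
  ts = np.linspace(0.0, 2.0 * np.pi, 10001)
  speeds = np.linalg.norm(np.array([-np.sin(ts), np.cos(ts)]), axis=0)
  print(speeds.min())                                # 1.0, never 0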
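The Mean Value Inequality can be sanity-checked numerically in the
same spirit. The map below is an arbitrary C^1 example of mine
(nothing from the thread), with the sup M over the segment
approximated by sampling, so this is an illustration rather than a
proof:

  import numpy as np

  def f(p):
      x, y = p
      return np.array([np.sin(x) + y**2 / 4.0, x * y])

  def jac(p):
      # Jacobian matrix f'(p) of the map above.
      x, y = p
      return np.array([[np.cos(x), y / 2.0],
                       [y,         x      ]])

  x = np.array([0.2, -1.0])
  y = np.array([1.5,  0.7])

  # M ~ sup of the operator norm |f'(z)| along the segment from x to y.
  ts = np.linspace(0.0, 1.0, 2001)
  M = max(np.linalg.norm(jac(x + t * (y - x)), 2) for t in ts)

  lhs = np.linalg.norm(f(x) - f(y))
  rhs = M * np.linalg.norm(x - y)
  print(lhs <= rhs)                                  # True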