From: Robin Chapman
Subject: Re: x!
Date: Sun, 02 Jul 2000 06:36:48 GMT
Newsgroups: sci.math
Summary: [missing]

In article ,
Ronald Bruck wrote:
> In article , "Ertai" wrote:
>
> :While I was playing with Derive I noticed that its version of the
> :factorial was defined even with real numbers: I think this is an
> :extension of the normal factorial, could someone give me the
> :definition of this function?
> :Anyway, I know this could be a strange curiosity, but is there a
> :value for the constant k in this formula:
> :
> :   lim (x->oo) e^(x^k) / x! = n
> :
> :such that n is neither 0 nor oo, but a real number > 0?
> :With Derive, and so this trial was quite experimental, I've noticed
> :that for k=1, n=0 and for k=2, n=oo, so, if this supposition is true
> :and _there is_ an n real > 0, 1 < k < 2 could give an approximate
> :result... anyway, an analytical calculation is surely better.
> :Thanx to everyone!
>
> The extension you're looking for is called the gamma function, and
> Derive is smart enough to know about it.
>
> Perhaps the best definition of it is
>
>                           n! n^x
>   gamma(x) = lim   -------------------
>              n->oo x(x+1)(x+2)...(x+n)
>
> The limit exists for all complex numbers x EXCEPT x = 0, -1, -2, ...
> (obviously). That makes it far superior to the integral definition,
>
>   gamma(x) = integral of e^-t t^(x-1) dt from 0 to infinity.
>
> It's easy to see that gamma satisfies the functional equation
>
>   gamma(x+1) = x gamma(x)
>
> and it's not hard to see that gamma(n) = (n-1)! for positive integers n.
> It also has the property that log gamma(x) is a convex function of
> x > 0, which characterizes the gamma function among all functions which
> satisfy the functional equation and for which g(1) = 1 (Bohr-Mollerup
> theorem).
>
> As for your asymptotic result, it's wrong. Here's one way to see the
> correct development (due to Binet): set
>
>   gamma(x+1) = sqrt(2 pi x) (x/e)^x e^mu(x)
>
> (i.e. DEFINE mu(x) by this equation). Then we deduce from the
> functional equation that
>
>   (*)   mu(x) - mu(x+1) = (x+1/2) log(1+1/x) - 1.
>
> Call the expression on the right-hand side g(x). Here's the bright
> idea: we try to write
>
>   (**)  mu(x) = g(x) + g(x+1) + g(x+2) + ...;
>
> **IF** this series converges, then mu(x) will satisfy the recurrence
> (*). It's not hard to find the series representation of g(x) as
>
>   g(x) = 1/3 (2x+1)^-2 + 1/5 (2x+1)^-4 + ...,
>
> for |2x+1| > 1, and for such x the series in (**) DOES converge. In
> fact,
>
>   g(x) < 1/3 (2x+1)^-2 (1 + (2x+1)^-2 + (2x+1)^-4 + ...)
>
>            1        1
>        = ----- - -------
>           12x    12(x+1)
>
> Since each term of (**) is positive and these upper bounds telescope
> when summed over x, x+1, x+2, ..., this shows that
> 0 < mu(x) < 1/(12x), which gives us the inequality
>
>   sqrt(2 pi x) (x/e)^x < gamma(x+1) < sqrt(2 pi x) (x/e)^x e^(1/(12x)).

Very impressive, but where on earth does that sqrt(2 pi) come
from :-)

> I first saw this exposition in an old book on complex analysis by
> Caratheodory.

--
Robin Chapman, http://www.maths.ex.ac.uk/~rjc/rjc.html
"`The twenty-first century didn't begin until a minute past midnight
January first 2001.'"  John Brunner, _Stand on Zanzibar_ (1968)
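
Both the limit definition and the 1/(12x) bounds above are easy to test
numerically. A minimal Python sketch, assuming only the standard library
(math.gamma serves as the reference value, and the helper name
gamma_limit is illustrative):

  import math

  def gamma_limit(x, n=100000):
      # Euler's limit  n! n^x / (x(x+1)...(x+n)), computed in log space
      log_num = math.lgamma(n + 1) + x * math.log(n)
      log_den = sum(math.log(x + k) for k in range(n + 1))
      return math.exp(log_num - log_den)

  print(gamma_limit(0.5), math.gamma(0.5))   # both ~ sqrt(pi) = 1.77245...

  # Binet's bounds: sqrt(2 pi x)(x/e)^x < gamma(x+1) < same times e^(1/(12x))
  x = 5.0
  s = math.sqrt(2 * math.pi * x) * (x / math.e) ** x
  assert s < math.gamma(x + 1) < s * math.exp(1 / (12 * x))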
==============================================================================
From: kovarik@mcmail.cis.McMaster.CA (Zdislav V. Kovarik)
Subject: Re: x!
Date: 2 Jul 2000 03:30:36 -0400
Newsgroups: sci.math

In article <8jmnu0$8nt$1@nnrp1.deja.com>,
Robin Chapman wrote:
:In article ,
: Ronald Bruck wrote:

[snip]

:> This shows that 0 < mu(x) < 1/(12x), which gives us the inequality
:>
:>   sqrt(2 pi x) (x/e)^x < gamma(x+1) < sqrt(2 pi x) (x/e)^x e^(1/(12x)).
:
:Very impressive, but where on earth does that sqrt(2 pi) come
:from :-)
:
:> I first saw this exposition in an old book on complex analysis by
:> Caratheodory.

sqrt(2*pi) is the limit of

  n! * exp(n) / n^(n+1/2)

as n goes to infinity. (Compare with the inequalities.) This limit can
be converted to an infinite product which is evaluated using Wallis's
product. One form of Wallis's product is

  pi/2 = lim[n to inf]
         (2 * 4 * ... * (2*n))^2 / ((2*n) * (1 * 3 * ... * (2*n-1))^2)

and that can be obtained by comparing integrals of (sin(t))^n over
[0, pi/2], which are known explicitly by reduction formulas. (They have
different product expressions for n even and n odd, and this helps.)
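
Both limits are easy to watch converge; a minimal Python sketch,
assuming only the standard library (the function names are
illustrative):

  import math

  def wallis(n):
      # (2*4*...*(2n))^2 / ((2n) * (1*3*...*(2n-1))^2)  ->  pi/2
      log_even = sum(math.log(2 * k) for k in range(1, n + 1))
      log_odd = sum(math.log(2 * k - 1) for k in range(1, n + 1))
      return math.exp(2 * log_even - 2 * log_odd - math.log(2 * n))

  def stirling_const(n):
      # n! * exp(n) / n^(n+1/2)  ->  sqrt(2*pi), via lgamma
      return math.exp(math.lgamma(n + 1) + n - (n + 0.5) * math.log(n))

  print(wallis(10**5), math.pi / 2)                     # 1.5707...
  print(stirling_const(10**5), math.sqrt(2 * math.pi))  # 2.5066...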
Cheers, ZVK(Slavek)

==============================================================================
From: Ronald Bruck
Subject: Re: x!
Date: Sun, 02 Jul 2000 13:49:14 -0700
Newsgroups: sci.math

In article <8jns7h$n$1@nnrp1.deja.com>,
Robin Chapman wrote:
:In article <8jmr2s$8dm@mcmail.cis.McMaster.CA>,
: kovarik@mcmail.cis.McMaster.CA (Zdislav V. Kovarik) wrote:
:> sqrt(2*pi) is the limit of
:>
:>   n! * exp(n) / n^(n+1/2)
:>
:> as n goes to infinity. (Compare with the inequalities.)

[snip]

:Again, very nice, but I was wondering where in Ron's argument
:the sqrt(2 pi) entered ... :-)

I checked Caratheodory's book, and he doesn't use sqrt(2pi)--he uses a
generic constant a. You can't go from the difference equation

  mu(x) - mu(x+1) = g(x)

to the assertion that

  mu(x) = g(x) + g(x+1) + ...

unless you know that mu(x) --> 0 as x --> \infty. If you choose a to be
something OTHER than \sqrt{2\pi}, mu(x) converges to the non-zero
constant log(\sqrt{2\pi}/a), not to 0. You need to KNOW Stirling's
formula in order to estimate mu(x).

Zdislav gives a very nice derivation of the constant. I've seen another
derivation in R. C. Buck's advanced calculus book, somewhat along the
lines of integrating log x but done more carefully--I don't remember the
details. Can anyone oblige (I don't have the book at hand)?

--Ron Bruck

--
Due to University fiscal constraints, .sigs may not exceed one line.
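
The role of the constant is easy to see numerically: define mu_a(x) by
gamma(x+1) = a sqrt(x) (x/e)^x e^(mu_a(x)) and compare a = sqrt(2 pi)
with any other choice. A minimal Python sketch, assuming only the
standard library (mu is an illustrative name):

  import math

  def mu(x, a):
      # gamma(x+1) = a * sqrt(x) * (x/e)^x * e^(mu(x)), solved for mu(x)
      return (math.lgamma(x + 1) - math.log(a)
              - 0.5 * math.log(x) - x * (math.log(x) - 1))

  for x in (10.0, 100.0, 1000.0):
      print(x, mu(x, math.sqrt(2 * math.pi)), 1 / (12 * x))  # 0 < mu < 1/(12x)
      print(x, mu(x, 2.0))  # stalls at log(sqrt(2*pi)/2) = 0.2257...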
==============================================================================
From: "Larry Shultis"
Subject: Re: x!
Date: Mon, 3 Jul 2000 12:22:04 -0500
Newsgroups: sci.math

> Zdislav gives a very nice derivation of the constant. I've seen another
> derivation in R. C. Buck's advanced calculus book, somewhat along the
> lines of integrating log x but done more carefully--I don't remember the
> details. Can anyone oblige (I don't have the book at hand)?

The proof is a little too involved to reproduce here (too many
integrals). It is section 4.5 of chapter 4 of the 1956 edition. He
comes down to proving:

  sqrt(2 pi/(1+epsilon)) < GAMMA(x+1)/(x^x e^(-x) sqrt(x)) < sqrt(2 pi/(1-epsilon))

Your memory seems to be intact. Mine wasn't. I took the advanced
calculus course from Buck back in 1964 at UW-Madison.
Larry

==============================================================================
From: Ronald Bruck
Subject: Re: x!
Date: Mon, 03 Jul 2000 11:36:51 -0700
Newsgroups: sci.math

In article <_X385.1378$ef3.388857@homer.alpha.net>,
"Larry Shultis" wrote:

> The proof is a little too involved to reproduce here (too many
> integrals). It is section 4.5 of chapter 4 of the 1956 edition.
[snip]
> Your memory seems to be intact. Mine wasn't. I took the advanced
> calculus course from Buck back in 1964 at UW-Madison.
> Larry

Hmmm. Mine wasn't so good after all, because when I came in to work
today (just to check this--no one else is here), I found the argument to
be more complicated than I remembered. (I took Math 204, using Buck's
book, at the U of Chicago in Winter '63; the instructor was Robert
Welland, now at Northwestern. Would to God **we** could use it for
**our** multivariable calculus course.)

Buck begins by noting that

  gamma(x+1)/x^(x+1) = \int_0^\infty (t e^{-t})^x dt

and breaks this integral up into pieces from 0 to 1-delta, 1-delta to
1+delta, and 1+delta to infinity. The first and last of these are no
problem (they are exponentially smaller than the middle piece); only the
middle is of interest. He writes this as

  I = e^{-x} \int_{-delta}^{delta} [(1+s) e^{-s}]^x ds

and then makes the approximation

  (*)  (1+s) e^{-s} = exp(-s^2 h(s)/2)  for |s| <= 1/2,

where h(s) --> 1 as s --> 0. (Compare power series.) This brings him to
the necessity of estimating

  \int_{-delta}^{delta} exp(-c x s^2) ds

as x --> infinity. A change of variable and an appeal to the fundamental
theorem of calculus (and a knowledge of the integral of exp(-x^2) on the
real line) show that

  lim_{x \to \infty} \sqrt{x} \int_{-delta}^{delta} exp(-c x s^2) ds
    = \sqrt{\pi/c},

from which the rest is routine. A pretty argument, and rather
straightforward once you get the idea of using (*), but much more
elaborate than I remembered. Somehow I had got it in my head that it was
a refinement of the argument that

  \int_1^x log t dt = (t log t - t)|_{t=1}^x

approximates log x!, but cobwebs accumulate over the years...

Anyway, THAT's where the \sqrt{2\pi} comes from. I've decided I prefer
Zdislav's Wallis product argument.

--Ron Bruck

--
Due to University fiscal constraints, all .sigs must be only one line.
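
Buck's two key identities check out numerically as well; a minimal
Python sketch with a crude midpoint rule, assuming only the standard
library (midpoint is an illustrative helper):

  import math

  def midpoint(f, a, b, n=200000):
      # crude midpoint-rule quadrature, good enough for a sanity check
      h = (b - a) / n
      return h * sum(f(a + (i + 0.5) * h) for i in range(n))

  x = 50.0

  # gamma(x+1)/x^(x+1) = int_0^oo (t e^-t)^x dt; integrand ~ 0 past t = 30
  lhs = math.exp(math.lgamma(x + 1) - (x + 1) * math.log(x))
  rhs = midpoint(lambda t: (t * math.exp(-t)) ** x, 0.0, 30.0)
  print(lhs, rhs)

  # sqrt(x) * int_{-d}^{d} exp(-x s^2 / 2) ds  ->  sqrt(2*pi)  (c = 1/2)
  d = 0.5
  g = midpoint(lambda s: math.exp(-x * s * s / 2), -d, d)
  print(math.sqrt(x) * g, math.sqrt(2 * math.pi))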