From: kovarik@mcmail.cis.McMaster.CA (Zdislav V. Kovarik)
Subject: Re: f(x+1)-f(x)=g(x)
Date: 22 Jan 2000 10:30:00 -0500
Newsgroups: sci.math

In article <86c2os$bhm$1@news.online.de>, Jürgen Will wrote:
:Hello,
:
:there is an equation with real functions f(x+1)-f(x)=g(x), where g(x)
:is given and f(x) is wanted.
:
:Is there a special branch of mathematics which deals with this kind of
:equation?
:How could you tackle this problem?
:Thanks.

It is a chapter called "Summation" in the branch of mathematics called
"Difference Calculus" (Differenzenrechnung). Methods were already known
to Newton and Gregory, and were later improved with the development of
the Euler-Maclaurin Formula, which uses Bernoulli polynomials. This
formula is found, among other places, in more advanced Numerical
Analysis textbooks. Other big names in summation are Boole, Laplace,
Stirling and, of course, Gauss (the list makes no claim to
completeness).

Classical books in this direction are Noerlund: Differenzenrechnung,
and Steffensen: Interpolation; both re-published by the Chelsea
Company. Look up the Euler-Maclaurin Formula, which relates sums to
integrals.

Also, an elementary method of summing a polynomial consists of
expanding it in "binomial terms", because they have neat sums (i.e.
solutions of the basic difference equation):

  (x+1 choose n+1) - (x choose n+1) = (x choose n)

(the difference analogue of (d/dx) (x^(n+1)/(n+1)!) = x^n/n!). This
expansion can be done using Newton's formula for interpolation, built
around divided differences. Again, today the audience is mainly the
classically minded numerical computation crowd.

Good luck, ZVK(Slavek).

==============================================================================

From: kovarik@mcmail.cis.McMaster.CA (Zdislav V.
Kovarik)
Subject: Re: f(x+1)-f(x)=g(x)
Date: 22 Jan 2000 17:43:56 -0500
Newsgroups: sci.math

In article <86c2os$bhm$1@news.online.de>, Jürgen Will wrote:
:Hello,
:
:there is an equation with real functions f(x+1)-f(x)=g(x), where g(x)
:is given and f(x) is wanted.
:
:Is there a special branch of mathematics which deals with this kind of
:equation?
:How could you tackle this problem?
:Thanks.

I gave references elsewhere. Here I will just give you an idea of the
freedom you have in solving such equations.

I assume that g is defined on some interval [a, infinity), although
other sets (so-called inductive sets) can be considered.

Notation: floor(x) is the largest integer less than or equal to x (the
"round-off towards minus infinity"). So: floor(-3) = -3,
floor(-3.2) = -4, floor(3.6) = 3, as examples. Sums over an empty set
of indices (say, 1<=j<=0) are defined as 0.

The "flat" solution is

  f(x) = 0 for all x in [a, a+1), else
  f(x) = sum[j = 0 to floor(x-a)-1] g(x + j - floor(x-a))

The mystery disappears if you work out an example (take a = 0):

  f(a+3.8) = g(0.8) + g(1.8) + g(2.8)
  f(a+4.8) = g(0.8) + g(1.8) + g(2.8) + g(3.8)

and then you check that f(a+4.8) - f(a+3.8) = g(3.8).

All other solutions F(x) are obtained as F(x) = f(x) + p(x), where p is
a periodic function with period 1, that is, p(x+1) = p(x) for all x.
Contrary to another reply, there are more than "continuum" many such
functions p: there are c^c of them, c^c being a cardinal number
actually equal to 2^(2^aleph_0), where aleph_0 is the cardinal number
counting all integers (and ^ denotes cardinal exponentiation).

The flat solution is not always the most welcome one. The interesting
solutions are the continuous, or differentiable, or "well-behaved near
infinity" ones, or those subject to some other restrictions, or a
combination of the above. For example, it is desirable that the
"summation" of a polynomial be given as a polynomial. That is what
makes the works of Noerlund and others worth looking at.
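[Editor's note: the binomial-term trick from the first post does exactly this: it sums a polynomial and returns a polynomial. A minimal Python sketch (the function names and the test polynomial g(x) = x^2 are illustrative choices, not from the thread) expands g in binomial terms C(x, k) via forward differences at 0, then raises each C(x, k) to C(x, k+1):]

```python
from math import comb

def forward_differences(values):
    """Iterated forward differences of a sequence: [g(0), Dg(0), D^2 g(0), ...]."""
    diffs = []
    while values:
        diffs.append(values[0])
        values = [b - a for a, b in zip(values, values[1:])]
    return diffs

def summation_of_polynomial(g, degree):
    """Return f with f(x+1) - f(x) = g(x) and f(0) = 0, for a polynomial g.

    Expand g(x) = sum_k D^k g(0) * C(x, k)  (Newton's forward-difference
    formula), then use  C(x+1, k+1) - C(x, k+1) = C(x, k)  termwise.
    """
    coeffs = forward_differences([g(j) for j in range(degree + 1)])
    return lambda x: sum(c * comb(x, k + 1) for k, c in enumerate(coeffs))

g = lambda x: x * x                   # test polynomial
f = summation_of_polynomial(g, 2)     # f(n) = 0^2 + 1^2 + ... + (n-1)^2
assert all(f(x + 1) - f(x) == g(x) for x in range(20))
assert f(5) == 0 + 1 + 4 + 9 + 16    # = 30 = 5*4*9/6, a polynomial value
```

[The resulting f is a polynomial in disguise: a linear combination of C(x, k+1), which is what "the summation of a polynomial given as a polynomial" means here.]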
A result of Bohr and Mollerup, for example, singles out the logarithm
of the gamma function as the result of summation of the logarithmic
function, by imposing an initial condition and convexity on it.

Another result, I think by Hermite or Hadamard, states that this gamma
function, together with its logarithm, cannot be obtained as the
solution of a differential equation of any (finite) order which has the
form of a multivariate polynomial in the variables and derivatives.
This puts a severe limitation on the validity of another reply, namely
that "the summation of a function could be reduced to solving a
differential equation".

Cheers, ZVK(Slavek).

==============================================================================

From: kovarik@mcmail.cis.McMaster.CA (Zdislav V. Kovarik)
Subject: Re: f(x+1)-f(x)=g(x)
Date: 24 Jan 2000 01:11:31 -0500
Newsgroups: sci.math

In article <86fbet$1da$1@epos.tesco.net>, Iain Davidson wrote:
:
:Zdislav V. Kovarik wrote in message
:news:86dbrc$sq6@mcmail.cis.McMaster.CA...
:> In article <86c2os$bhm$1@news.online.de>,
:> Jürgen Will wrote:
:> :Hello,
[...]
:> The flat solution is not always the most welcome one. The
:> interesting solutions are the continuous, or differentiable, or
:> "well-behaved near infinity" ones, or those subject to some other
:> restrictions, or a combination of the above. For example, it is
:> desirable that the "summation" of a polynomial be given as a
:> polynomial. That is what makes the works of Noerlund and others
:> worth looking at.
[...]
:
:So how would you go about solving
:f(x+1) - f(x) = sqrt(x) or

The question admits many interpretations: the flat solution (trace it
back in this thread) is a legitimate answer, because you did not
specify any additional qualities of the required solution.
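[Editor's note: the flat solution referred to here can be written out in a few lines of Python (a sketch; the function name and the test choices g = sqrt, a = 0 are illustrative, not from the thread):]

```python
import math

def flat_solution(g, a):
    """The 'flat' solution of f(x+1) - f(x) = g(x):
    f vanishes on [a, a+1) and is built up by unit steps above it."""
    def f(x):
        n = math.floor(x - a)                  # number of unit steps above [a, a+1)
        return sum(g(x + j - n) for j in range(n))
    return f

g = math.sqrt                                  # any g defined on [0, inf) will do
f = flat_solution(g, 0.0)
assert f(0.5) == 0                             # flat on [a, a+1)
for x in (1.3, 3.8, 7.25):
    assert abs(f(x + 1) - f(x) - g(x)) < 1e-9  # the defining equation holds
```

[Evaluating f(3.8) reproduces the example in the post: g(0.8) + g(1.8) + g(2.8).]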
Taking advantage of the given circumstances, I can offer a solution
which is infinitely differentiable on [0, inf), and whose derivatives
(equipped with appropriate signs) are ultimately completely monotone:

  f(x) = (2/3)*x^(3/2) - (1/2)*x^(1/2) + F(x) , where

  F(x) = (1/6)*sum[j=1 to inf.] (sqrt(x+j+1)+sqrt(x+j))^(-3)

:f(x+1) - f(x) = exp(sin(x^2))

Here I would have to go to Noerlund's book for a solution which would
behave nicely near infinity. The result would be in the form of the
limit of a parametric family of infinite series of "nice" functions.

The leading idea is: if g(x) goes to 0 fast enough as x goes to
infinity (say, g is monotone and integrable), then

  u(x) = C - sum[j=0 to inf] g(x+j)

satisfies u(x+1) - u(x) = g(x), and inherits some monotonicity
properties. With some ingenious "damping", one can exploit this in
divergent situations, too.

:
:Some of the older books on finite differences use Heaviside-type
:methods using difference operators, increment operators and so on.
:

References, please?

:So
:
: f(x+1) - f(x) = sqrt(x)
:
:would become
:
:(D + (D^2)/2 + (D^3)/6 + ...)f(x) = g(x)
:
:After algebraic manipulation
:
:f(x) = (1/D)(1 - D/2 + (D^2)/12 - (D^4)/720 + ...)g(x)
:
:Does this mean that the summation of a function could be
:reduced to an infinite integro-differential equation?

Yes and no. It is still not in a form one could use to write a program
for evaluating f(x) (even if efficiency were not an issue). With some
more theory and manipulation, one would re-discover the Bernoulli
numbers and the Euler-Maclaurin Formula. It is known that trying to
apply the infinite expansion in this raw form often leads to rapidly
divergent series.

Cheers, ZVK(Slavek).

==============================================================================

From: kovarik@mcmail.cis.McMaster.CA (Zdislav V. Kovarik)
Subject: Re: f(x+1)-f(x)=g(x)
Date: 27 Jan 2000 01:44:42 -0500
Newsgroups: sci.math

In article <388FBE9C.D5425FD4@physics.uwa.edu.au>, Paul Abbott wrote:
:"Zdislav V.
Kovarik" wrote:
:> :So how would you go about solving
:> :f(x+1) - f(x) = sqrt(x) or
[...]
:> Taking advantage of the given circumstances, I can offer a solution
:> which is infinitely differentiable on [0, inf), and whose
:> derivatives (equipped with appropriate signs) are ultimately
:> completely monotone:
:>
:> f(x) = (2/3)*x^(3/2) - (1/2)*x^(1/2) + F(x) , where
:>
:> F(x) = (1/6)*sum[j=1 to inf.] (sqrt(x+j+1)+sqrt(x+j))^(-3)
:
:Surely you mean sum[j=0 to inf.] ... ?
:

My mistake; yes, summation from zero. Thanks.

Cheers, ZVK(Slavek).

:Cheers,
: Paul
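[Editor's note: with the summation taken from j = 0 as corrected here, the sqrt formula can be checked numerically. A minimal Python sketch (the function name and the truncation cutoff are editorial choices; the series is summed to a finite number of terms, whose tail telescopes away almost completely in f(x+1) - f(x)):]

```python
import math

def f(x, terms=100_000):
    """Kovarik's solution of f(x+1) - f(x) = sqrt(x) on [0, inf),
    with the series summed from j = 0 per the correction above.
    The infinite series is truncated at `terms` terms."""
    F = sum((math.sqrt(x + j + 1) + math.sqrt(x + j)) ** -3
            for j in range(terms)) / 6.0
    return (2.0 / 3.0) * x ** 1.5 - 0.5 * math.sqrt(x) + F

# The defining difference equation holds to within the truncation error:
for x in (0.0, 1.0, 2.5):
    assert abs(f(x + 1) - f(x) - math.sqrt(x)) < 1e-7
```

[In the difference f(x+1) - f(x) the truncated sums telescope, leaving only the first and last series terms, so the residual error is on the order of terms^(-3/2) rather than the much larger tail of F itself.]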