From: ullrich@math.okstate.edu (David C. Ullrich)
Subject: Re: Q: Fourier Transform Weirdness
Date: Mon, 27 Nov 2000 15:47:01 GMT
Newsgroups: sci.math

On Mon, 27 Nov 2000 13:10:24 GMT, Dan Greenfield wrote:

>Hi,
>
> I was hoping someone out there could clarify something for me:
>
>why is Integral(exp(iwx)dx) from -infinity to infinity = zero everywhere
>but w = 0?

It isn't. There are lies in those books...

>I don't understand how an integral of say w=1 yielding
>Integral((cos(x)+isin(x))dx) could be zero when taking the 'boundaries'
>-infinity and infinity. Surely it is absurd to give exp(i*infinity) any
>definite value, let alone presume to say it cancels with exp(-i*infinity)?

Yes, it's absurd.

>And yet the (continuous) fourier transform of a constant is merely a
>multiple of delta(w). Is there something I am missing?

But nonetheless the Fourier transform of a constant _is_ a multiple of
that "delta function" (the delta function is _not_ a function, it's
something else!). The Fourier transform of 1 is delta. But the Fourier
transform is _not_ defined by the integral your book says it's defined
by!

If f is a "nice" function then the Fourier transform F(f) is defined by
that integral (here, saying that the integral of |f| from -infinity to
infinity is finite is nice enough). A person uses that integral to define
the FT of nice functions. Then a person notices that

  (*)   int(f*F(g)) = int(F(f)*g)

if f and g are both nice functions (here "int" means "integral from
-infinity to infinity", F(f) is the FT of f, F(g) is the FT of g, and *
denotes multiplication, not convolution).

At this point the definition of the FT gets extended from just nice
functions to a much larger class of objects, called distributions. The FT
of a distribution is _not_ defined as the integral of exp(-itx) times the
distribution. Exactly how it is defined is a long story. But it's defined
in such a way that (*) remains valid if f is a distribution and g is an
extremely nice function (extremely nice is a little more than nice...).
In fact (*) is more or less the definition of the FT of a distribution,
although it takes a little while to say exactly what I mean by that.

Now what should the FT of the constant 1 be? It's delta. I'm not going to
say exactly what delta _is_, but we know what delta is supposed to _do_:
delta is supposed to have the property that

  (**)  int(g*delta) = g(0)

if g is nice enough. In fact (**) is more or less the definition of the
"delta function".

So verifying that F(1) = delta amounts to verifying that (*) is correct
if we set f = 1 and F(f) = delta. And that works. Using (**), plus the
inversion theorem to say that int(F(g)) = g(0), we see

  int(f*F(g)) = int(F(g)) = g(0) = int(delta*g),

and if F(f) = delta then this says int(f*F(g)) = int(F(f)*g), as desired.

This is sort of a "top-down" approach to the whole thing. There's a more
"bottom-up" approach in ZK's reply. If you really want to know how the
math works you need to study "distributions". (These are not the
"distributions" that come up in probability, btw.)

>- DG
>
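
A quick numerical sketch of the first point, for anyone who wants to see
it: truncating the integral at +-L gives int_{-L}^{L} exp(iwx) dx =
2*sin(wL)/w for w != 0, and that quantity never settles down as L grows --
it just oscillates between -2/w and 2/w. So the improper integral doesn't
converge to 0, or to anything else. (Python with numpy; the value w = 1
and the cutoffs L below are just choices for illustration.)

import numpy as np

def truncated_integral(w, L, n=200001):
    """Trapezoid-rule estimate of the integral of exp(i*w*x) dx over [-L, L]."""
    x = np.linspace(-L, L, n)
    y = np.exp(1j * w * x)
    dx = x[1] - x[0]
    return dx * (y.sum() - 0.5 * (y[0] + y[-1]))

w = 1.0
for L in (10.0, 100.0, 1000.0, 10000.0):
    numeric = truncated_integral(w, L)
    exact = 2 * np.sin(w * L) / w      # closed form of the truncated integral
    print(L, numeric.real, exact)      # bounces around; no limit as L grows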
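
A similar sketch of the duality (*), for two "nice" functions. To compute
anything you have to pick a normalization; I'll use F(g)(s) = int g(x)
exp(-2*pi*i*s*x) dx, which is one common convention (nothing above fixes
one) and has the pleasant feature that int(F(g)) = g(0) comes out with no
stray factors of 2*pi. The two Gaussians below are just convenient nice
functions; with another normalization the same identities hold up to
constants.

import numpy as np

x = np.linspace(-8.0, 8.0, 801)      # grid used for both x and the frequency s
dx = x[1] - x[0]

def ft(h):
    """Riemann-sum approximation of F(h)(s) = int h(x) exp(-2*pi*i*s*x) dx,
    sampled at the same grid of s values as x."""
    kernel = np.exp(-2j * np.pi * x[:, None] * x[None, :])
    return (kernel * h[None, :]).sum(axis=1) * dx

f = np.exp(-np.pi * x**2)            # a nice function (its own transform)
g = np.exp(-np.pi * (x - 1)**2)      # another nice function, shifted

lhs = np.sum(f * ft(g)) * dx         # int f*F(g)
rhs = np.sum(ft(f) * g) * dx         # int F(f)*g
print(lhs, rhs)                      # agree (imaginary parts are numerically ~0)

print(np.sum(ft(g)) * dx, g[len(x) // 2])   # int F(g) vs g(0): also agree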
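
Finally, a sketch of the "bottom-up" picture mentioned at the end.
Truncate the divergent transform of 1 at +-L: with the same convention as
above, int_{-L}^{L} exp(-2*pi*i*s*x) dx = sin(2*pi*L*s)/(pi*s). Integrate
that kernel against a smooth test function g and let L grow: the result
approaches g(0), which is exactly the defining property (**) of delta.
(The test function below is my own choice; it's centered at 0.3 rather
than 0, so you can see that the limit really is g(0) = 0.7537... and not
the peak value of g.)

import numpy as np

s = np.arange(-8.0, 8.0, 0.001)
ds = s[1] - s[0]

g = np.exp(-np.pi * (s - 0.3)**2)        # smooth, rapidly decaying test function
print(np.exp(-np.pi * 0.3**2))           # g(0), the value delta should pick out

for L in (0.5, 1.0, 2.0, 4.0):
    kernel = 2 * L * np.sinc(2 * L * s)  # = sin(2*pi*L*s)/(pi*s), truncated FT of 1
    print(L, np.sum(g * kernel) * ds)    # approaches g(0) as L grows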