From: ullrich@math.okstate.edu (David C. Ullrich)
Subject: Re: defining Dirac's delta, or rather, "mathematical rigor"
Date: Fri, 14 Jul 2000 13:37:30 GMT
Newsgroups: sci.math
On 13 Jul 2000 18:42:10 -0400, antoSPORK@mit.edu (J. Antonio Ramirez
R.) wrote:
>jmccarty@sun1307.ssd.usa.alcatel.com (Mike Mccarty Sr) writes:
>
>> Try Royden "Real Analysis". Or Kai Lai Chung (forget title).
>
>I have Royden, and as far as I can tell it only mentions derivatives
>of measures that are absolutely continuous (and I'm familiar with that
>case). Any hints as to the possible heading under which it might be?
>
>The measure concentrated at the origin is not absolutely continuous;
>in fact it's singular. I'm still curious as to what is meant by its
>derivative.
Oh, _that's_ the problem. I don't know whether Mike
mistyped or you misread: The delta "function" does have a
derivative in the sense of distributions, but that has nothing
to do with this absolutely-continuous stuff. The delta function
_is_ the derivative of the Heaviside function H(x) = 0
for x < 0, H(x) = 1 for x >= 0.
H is not integrable but it's locally integrable. It
has bounded variation, which is why its distribution
derivative is a measure, but H is not absolutely continuous,
so its derivative is not a (locally) integrable function.
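This picture can be checked numerically: smooth the Heaviside jump out over a small width eps and differentiate, and the derivative is a bump of total mass 1 that concentrates at 0 as eps shrinks. A minimal Python sketch (the sigmoid smoothing and the grid sizes are my own choices, not anything from the thread):

```python
# Sketch: a smoothed Heaviside function has a classical derivative which
# is a bump of total mass ~1 concentrating at 0 -- the usual picture of
# delta as the distributional derivative of H.
import numpy as np

def trap(y, x):
    # simple trapezoid rule on a grid
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def smoothed_heaviside(x, eps):
    # one convenient smooth approximation to H; the sigmoid is arbitrary
    return 1.0 / (1.0 + np.exp(-x / eps))

x = np.linspace(-1, 1, 200001)
for eps in (0.1, 0.01, 0.001):
    H_eps = smoothed_heaviside(x, eps)
    dH = np.gradient(H_eps, x)      # approximate derivative of smoothed H
    mass = trap(dH, x)              # total integral of the bump
    print(eps, mass)                # mass stays ~1 while the bump narrows
```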
Although H is not an absolutely continuous function
(not even continuous) the measure defined by mu(S) =
the integral over S of H _is_ an absolutely continuous
measure. That's life. (Part of the problem is that the
standard terminology about "differentiation" of measures
really should be talking about differentiation of the
indefinite integral of the measure. Say f(x) = x. Then
the derivative of f is 1. But when we have our
Radon-Nikodym hats on the "derivative" of f is just
the function f; f, or rather the measure defined by
f, _is_ the R-N derivative of the measure defined
by f.)
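The f(x) = x example can be made concrete. The measure mu with density f has indefinite integral F(t) = mu([0, t]) = t^2/2; the classical derivative of F recovers f, and f itself is the Radon-Nikodym density of mu with respect to Lebesgue measure. A hedged numerical sketch (the scheme and names are mine):

```python
# Sketch of the f(x) = x example: mu(S) = integral over S of x dx has
# "indefinite integral" F(t) = mu([0, t]) = t^2/2.  Differentiating F
# classically recovers f(t) = t, the Radon-Nikodym density of mu with
# respect to Lebesgue measure.
import numpy as np

def trap(y, x):
    # simple trapezoid rule on a grid
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

f = lambda x: x                      # density of mu w.r.t. Lebesgue measure

def F(t, n=100001):
    # F(t) = mu([0, t]) computed numerically
    xs = np.linspace(0.0, t, n)
    return trap(f(xs), xs)

t, h = 0.7, 1e-5
F_prime = (F(t + h) - F(t - h)) / (2 * h)   # numerical derivative of F
print(F_prime, f(t))                        # both ~0.7
```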
>I seem to find *lots* of books by Kai Lai Chung in the library
>catalog...
>
>Thanks though.
>
>> In article ,
>> J. Antonio Ramirez R. wrote:
>> )jmccarty@sun1307.ssd.usa.alcatel.com (Mike Mccarty Sr) writes:
>> ) [...]
>
>> )What is a good reference for this sort of thing? Can one actually
>> )define a derivative for a singular measure? Obviously not as a
>> )function in L^1, but maybe in some other sense.
[deletia --djr]
==============================================================================
From: ullrich@math.okstate.edu (David C. Ullrich)
Subject: Re: defining Dirac's delta, or rather, "mathematical rigor"
Date: Sat, 15 Jul 2000 12:56:58 GMT
Newsgroups: sci.math
On 14 Jul 2000 15:36:07 -0400, antoSPORK@mit.edu (J. Antonio Ramirez
R.) wrote:
>ullrich@math.okstate.edu (David C. Ullrich) writes:
>
>> On 13 Jul 2000 18:42:10 -0400, antoSPORK@mit.edu (J. Antonio Ramirez
>> R.) wrote:
>>
>> >jmccarty@sun1307.ssd.usa.alcatel.com (Mike Mccarty Sr) writes:
>> >
>> >> Try Royden "Real Analysis". Or Kai Lai Chung (forget title).
>> >
>> >I have Royden, and as far as I can tell it only mentions derivatives
>> >of measures that are absolutely continuous (and I'm familiar with that
>> >case). Any hints as to the possible heading under which it might be?
>> >
>> >The measure concentrated at the origin is not absolutely continuous;
>> >in fact it's singular. I'm still curious as to what is meant by its
>> >derivative.
>>
>> Oh, _that's_ the problem. I don't know whether Mike
>> mistyped or you misread: The delta "function" does have a
>> derivative in the sense of distributions, but that has nothing
>> to do with this absolutely-continuous stuff. The delta function
>> _is_ the derivative of the Heaviside function H(x) = 0
>> for x < 0, H(x) = 1 for x >= 0.
>
>OK, but what kind of object is the delta function? In one post I said
>that the delta function can be thought of as the measure such that
>m(A)=1 iff 0\in A, 0 otherwise, because that's what I had been led to
>believe after reading Lang's Real and Functional Analysis (I think).
That's correct, it is that measure. (On another day of the
week we'd regard it as a distribution instead of a measure.)
>What puzzled me is that Mike said that the delta "function" was not
>this measure, but the derivative of this measure,
If he said that he mis-spoke. (I think...)
>so I was curious to
>know in what sense. With your comments this is clearer now.
>
>It seems to me that the most important formal property of the delta
>function is that the integral of \delta times f equals f(0), and this
>formal property is preserved by the measure concentrated at the
>origin.
That's exactly right. Well, not literally _exactly_ right,
it would be a better idea formally to call it the integral of f with
respect to delta, not the integral of f times delta. But you knew
that - as far as motivation/what-does-it-all-mean goes, that's
exactly right. (Actually, come to think of it, it's even righter than
that: if f is continuous and mu is a measure then there is a
well-defined notion of "f times mu", and the integral of f times
mu is the same as the integral of f wrt mu.)
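One way to see the "integral of f times delta" remark numerically: replace delta by a narrow Gaussian of total mass 1 and watch the integral of f times the Gaussian tend to f(0) for continuous f. (The Gaussian approximant is my choice of illustration, not something from the thread.)

```python
# Sketch: integrating f against narrow Gaussian approximants to delta
# recovers f(0) in the limit.
import numpy as np

def trap(y, x):
    # simple trapezoid rule on a grid
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def delta_eps(x, eps):
    # Gaussian of total mass 1 and width eps
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

f = np.cos                          # any continuous f; here f(0) = 1
x = np.linspace(-1, 1, 400001)
for eps in (0.1, 0.01, 0.001):
    approx = trap(f(x) * delta_eps(x, eps), x)
    print(eps, approx)              # tends to f(0) = 1 as eps shrinks
```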
[deletia --djr]
==============================================================================
From: david_ullrich@my-deja.com
Subject: Re: defining Dirac's delta, or rather, "mathematical rigor"
Date: Sat, 15 Jul 2000 18:24:55 GMT
Newsgroups: sci.math
In article <8kq3ea$r0q$1@sshuraac-i-1.production.compuserve.com>,
"Jan C. Hoffmann" <100550.3643@compuserve.com> wrote:
>
> David C. Ullrich wrote in the news posting:
> 39706030.129658172@nntp.sprynet.com...
> > On Fri, 14 Jul 2000 21:55:07 +0200, "Jan C. Hoffmann"
> > <100550.3643@compuserve.com> wrote:
> [..]
> > >> Heaviside function H(x) = 0 for x < 0, H(x) = 1 for x >= 0
> >
> > Because of the mangled nature of the citation I really
> > have no idea what you're saying here. If you give a coherent
> > quote of something I said that you think is incorrect I'll
> > explain.
>
> Thanks. Sorry if I mangled something. What I am thinking about is that there
> should be a small range x (in German literature called epsilon) where the
> delta distribution is located.
I don't know _exactly_ what that means. The delta "function"
is a distribution, as well as being a measure. And by virtue
of being a distribution or a measure it has a "support" - that's
a precisely-defined version of "the set where it is located".
And in fact the support of the delta function is {0}. Not
(-eps/2, eps/2), it's exactly {0}, ie [0,0]. I can prove
that if you want.
> The Heaviside function seems not to have such a small range.
>
> I didn't think you are incorrect. I just couldn't find out where the delta
> distribution is located in case the Heaviside function is applied.
I don't know exactly what this means.
Among other things, delta is a measure with compact
support. If mu is a measure with compact support and f
is a continuous function then the integral of f with
respect to mu is well-defined. But H is not a continuous
function.
If we take H to be literally the function with
H(x) = 0 for x < 0 and H(x) = 1 for x >= 0 then it
happens that the integral of H wrt delta is also
well-defined, and the integral is 1, since H(0) = 1.
But that's probably not the best way to look at it -
the function H just _happens_ to be integrable wrt
delta in spite of being non-continuous. If this
bothers you because you feel that the value of
H(0) should not matter that's fine: A person can
also regard a measure with compact support as a
linear functional on the space of continuous
functions - from that point of view you simply
_cannot_ "apply" delta to H.
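The point about H(0) is easy to make concrete: integrating a function against the point mass at the origin is just evaluation at 0, so two versions of H that differ only at 0 integrate differently. A toy sketch (the function names are mine):

```python
# Sketch: integrating g against the measure concentrated at the origin
# just evaluates g at 0, so the value H(0) really does matter.
def integrate_against_delta(g):
    # integral of g with respect to the point mass at 0
    return g(0)

H_right = lambda x: 1.0 if x >= 0 else 0.0   # H(0) = 1, as in the post
H_left  = lambda x: 1.0 if x > 0 else 0.0    # same function except at 0

print(integrate_against_delta(H_right))  # 1.0
print(integrate_against_delta(H_left))   # 0.0
```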
Nobody said anything about applying H to delta
or applying delta to H. What's been said is that
delta is the (distributional) _derivative_ of H.
This is very easy to prove, once we have the
definition of "derivative in the sense of
distributions" straight.
Let's say S = {infinitely differentiable
functions with compact support}. The elements
of S are called test functions. A _distribution_
is by definition a linear functional on S, bounded
with respect to a certain topology. If D and L are
two distributions then we say that D is the derivative
of L if
D(f) = - L(f')
for all test functions f. (This is motivated by
integration by parts - think of L(f) as the
integral of L times f...).
Now H "is" a distribution, if we say that
H(f) = the integral of H times f,
and delta is a distribution if we say that
delta(f) = f(0),
for test functions f. So to show that delta is
the derivative of H we need to show that
delta(f) = - H(f')
for all f; in other words we need to show that
f(0) = - the integral of f' from 0 to infinity,
which is clear (since f has compact support).
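This last identity can be sanity-checked numerically with the standard smooth bump f(x) = exp(-1/(1 - x^2)) on (-1, 1), extended by 0 (my choice of test function): since f vanishes at 1, minus the integral of f' from 0 to infinity equals f(0).

```python
# Sketch: for a compactly supported smooth f, f(0) = - integral of f'
# from 0 to infinity, i.e. delta(f) = -H(f').
import numpy as np

def trap(y, x):
    # simple trapezoid rule on a grid
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def bump(x):
    # the standard smooth bump, supported on [-1, 1]
    return np.where(np.abs(x) < 1, np.exp(-1.0 / (1.0 - x**2)), 0.0)

def bump_prime(x):
    # analytic derivative of the bump on (-1, 1)
    return np.where(np.abs(x) < 1,
                    bump(x) * (-2.0 * x) / (1.0 - x**2)**2,
                    0.0)

x = np.linspace(0.0, 0.999, 200001)
lhs = bump(0.0)                     # f(0) = e^{-1} ~ 0.3679
rhs = -trap(bump_prime(x), x)       # - integral of f' from 0 to infinity
print(lhs, rhs)                     # the two agree
```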
[deletia --djr]
Sent via Deja.com http://www.deja.com/
Before you buy.