From: israel@math.ubc.ca (Robert Israel)
Subject: Re: Combining Distributions
Date: 10 May 1999 21:50:00 GMT
Newsgroups: sci.math
Keywords: arithmetic of distributions
In article <01be9b26$26024ca0$058b88d0@us62351846-sdd>,
steve wrote:
>When a and b are variables which can each be represented by Normal
>Distributions with different means and std deviations, what is the mean and
>std deviation of:
>a+b?
>a-b?
>a*b?
>a/b?
>
>Experimentally I have determined that the resultant mean is simply the
>arithmetic result, but the deviations are obviously somewhat more
>complicated. Anyone knowing the answer, or suggesting a resource would be
>greatly appreciated. Please post to the newsgroup. Thanks.
The mean of a sum of two random variables is always the sum of the means.
Similarly for difference. This should be prominently stated in any
elementary probability text (but I'm often disappointed to see that it
isn't).
The standard deviation of a sum or difference of _independent_ random
variables is the square root of the sum of the squares of the standard
deviations of each.
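A quick numerical sanity check of both rules (a rough Monte Carlo sketch in
Python with NumPy; the particular means, standard deviations, sample size
and seed are arbitrary choices, not part of the original post):

    import numpy as np

    rng = np.random.default_rng(0)
    ma, sa = 3.0, 2.0      # mean and std deviation of a (arbitrary)
    mb, sb = -1.0, 0.5     # mean and std deviation of b (arbitrary)
    a = rng.normal(ma, sa, size=1_000_000)
    b = rng.normal(mb, sb, size=1_000_000)

    print((a + b).mean(), ma + mb)                # means add ...
    print((a - b).mean(), ma - mb)                # ... and subtract
    print((a + b).std(), np.sqrt(sa**2 + sb**2))  # sd of the sum
    print((a - b).std(), np.sqrt(sa**2 + sb**2))  # sd of the difference: same

Each printed pair should agree to a couple of decimal places.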
If a and b are independent with means ma and mb and standard deviations
sa and sb respectively, then a*b has mean ma*mb and standard deviation
sqrt( sa^2 mb^2 + sb^2 ma^2 + sa^2 sb^2).
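The same kind of Monte Carlo sketch (again with arbitrary parameters) can be
used to check the product formula; normality is not actually needed for it,
only independence and finite variances:

    import numpy as np

    rng = np.random.default_rng(1)
    ma, sa = 3.0, 2.0
    mb, sb = -1.0, 0.5
    a = rng.normal(ma, sa, size=1_000_000)
    b = rng.normal(mb, sb, size=1_000_000)

    print((a * b).mean(), ma * mb)
    print((a * b).std(),
          np.sqrt(sa**2 * mb**2 + sb**2 * ma**2 + sa**2 * sb**2))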
If a and b are independent random variables, a not almost surely 0, and b
has a density that is continuous and nonzero in a neighbourhood of 0,
then a/b does not have a mean or standard deviation. The problem is
that values of b close to 0 make a/b very large, and this happens too
often for the mean to exist. In particular, if a and b are independent
normal random variables with mean 0, a/b has a Cauchy distribution.
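One way to see the Cauchy claim numerically (a sketch assuming standard
normals for a and b, so that a/b should be standard Cauchy): the standard
Cauchy CDF is 1/2 + arctan(x)/pi, so its quartiles are -1, 0 and +1, and
the empirical quartiles of a/b should land close to those values.

    import numpy as np

    rng = np.random.default_rng(2)
    a = rng.normal(0.0, 1.0, size=1_000_000)
    b = rng.normal(0.0, 1.0, size=1_000_000)

    # Quartiles of a standard Cauchy are -1, 0, +1; the empirical
    # quartiles of a/b should be close to them.
    print(np.quantile(a / b, [0.25, 0.5, 0.75]))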
Robert Israel israel@math.ubc.ca
Department of Mathematics http://www.math.ubc.ca/~israel
University of British Columbia
Vancouver, BC, Canada V6T 1Z2
==============================================================================
From: "Patrick Powers"
Subject: Re: Combining Distributions
Date: Mon, 10 May 1999 15:45:19 -0700
Newsgroups: sci.math
Robert Israel wrote in message <7h7ka8$isv$1@nntp.ucs.ubc.ca>...
>If a and b are independent random variables, a not almost surely 0, and b
>has a density that is continuous and nonzero in a neighbourhood of 0,
>then a/b does not have a mean or standard deviation. The problem is
>that values of b close to 0 make a/b very large, and this happens too
>often for the mean to exist. In particular, if a and b are independent
>normal random variables with mean 0, a/b has a Cauchy distribution.
>
If I may add....
This result is often difficult to believe at first, particularly with the
Cauchy distribution, which is symmetric about 0. It would seem that the
mean must then be 0.
The non-technical explanation is that if you were to try to estimate the
mean of a Cauchy random variable by taking the average of samples, this
average would NOT tend to any particular value as the sample size increases.
The technical explanation is that the integral which defines the mean does
not converge.
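A small simulation (a rough sketch; the seed and sample sizes are arbitrary)
makes the non-technical explanation vivid: the running average of normal
samples settles down near the mean, while the running average of Cauchy
samples keeps wandering.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 1_000_000
    normal = rng.normal(0.0, 1.0, n)
    cauchy = rng.standard_cauchy(n)

    # Compare running averages at increasing sample sizes: the normal
    # averages shrink toward 0, the Cauchy averages do not settle down.
    for k in (10**3, 10**4, 10**5, 10**6):
        print(k, normal[:k].mean(), cauchy[:k].mean())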
==============================================================================
From: pmontgom@cwi.nl (Peter L. Montgomery)
Subject: Re: Var(XY) for X,Y indep.?
Date: Sun, 20 Jun 1999 08:25:36 GMT
Newsgroups: sci.math
In article, "Pansy" writes:
>
>James wrote in message
>news:7khand$8rl$1@vixen.cso.uiuc.edu...
>>
>> Hello!
>>
>> I would like to ask a basic question(but difficult to me).
>> What is the result of
>>
>> Var(XY) for X,Y are independent(continous)?
>>
>> Is there any way to get it without calculating it
>> using joint density function?
>
>Well, the joint [probability] density function is the product of the
>marginal PDFs in this case, so likewise Var(XY) would equal Var(X)Var(Y)
>under appropriate convergence conditions.
No. Suppose X and Y are independently and uniformly distributed
over [99, 101]. The variances of X and Y are at most 1
(actually 1/3). But E(XY) = 100*100 = 10000, while XY will be below
(99.5)^2 = 9900.25 at least 1 time in 16 (whenever X and Y are both
below 99.5), and above (100.5)^2 = 10100.25 at least 1 time in 16,
so its variance is far above 1/9.
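A direct numerical check of this counterexample (a rough sketch in Python
with NumPy; the sample size and seed are arbitrary):

    import numpy as np

    rng = np.random.default_rng(4)
    x = rng.uniform(99.0, 101.0, size=1_000_000)
    y = rng.uniform(99.0, 101.0, size=1_000_000)

    print(x.var(), y.var())   # each about 1/3, so Var(X)*Var(Y) ~ 1/9
    print((x * y).var())      # about 6667 -- nowhere near 1/9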
But if X and Y are independent, so are X^2 and Y^2.
Hence E((XY)^2) = E(X^2) E(Y^2). This and
E(X^2) = E(X)^2 + var(X) leads to
E(XY)^2 + var(XY)
= E((XY)^2)
= E(X^2) * E(Y^2)
= (E(X)^2 + var(X)) * (E(Y)^2 + var(Y))
Cancel the common term E(XY)^2 = E(X)^2 * E(Y)^2 (which holds because
E(XY) = E(X) E(Y) for independent X and Y) to derive
var(XY) = var(X)*E(Y)^2 + var(Y)*E(X)^2 + var(X)*var(Y)
As a check, if Y is constant (so that var(Y) = 0),
this reduces to var(XY) = var(X)*Y^2.
Pansy's formula var(XY) = var(X)*var(Y) is correct in the special case
E(X) = E(Y) = 0.
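The derived formula is easy to verify by simulation (a rough sketch; the
means, standard deviations, sample size and seed are arbitrary, and the
distributions need not be normal):

    import numpy as np

    rng = np.random.default_rng(5)
    mx, sx = 2.0, 3.0     # arbitrary mean and std deviation for X
    my, sy = -4.0, 1.5    # arbitrary mean and std deviation for Y
    x = rng.normal(mx, sx, size=1_000_000)
    y = rng.normal(my, sy, size=1_000_000)

    vx, vy = sx**2, sy**2
    print((x * y).var())                      # Monte Carlo estimate
    print(vx * my**2 + vy * mx**2 + vx * vy)  # var(X)E(Y)^2 + var(Y)E(X)^2 + var(X)var(Y)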
--
Peter-Lawrence.Montgomery@cwi.nl Home: San Rafael, California
Microsoft Research and CWI