From: hrubin@odds.stat.purdue.edu (Herman Rubin)
Subject: Re: How to find distribution for...
Date: 4 Sep 1999 15:33:13 -0500
Newsgroups: sci.math
Keywords: combinations of independent random variables
In article <37D16CB6.B39DBBE7@eunet.yu>,
Filip Miletic wrote:
>Hello all.
>I'm having trouble with the following problem:
>Given three independent random variables X, Y, Z with standard normal
>distributions
> N(0,1),
>find the distribution of:
>U = (X + YZ) / SQRT(1 + Z^2).
>Is there an (easy) way to do this? Applying the inversion theorem
>seems to get you nowhere, since you are stuck with a joint density
>that seems impossible to integrate. I suspect there might be an
>elegant ad-hoc solution.
There is a very simple way to do this, and it should also drop
out with integration.
If the A_j are independent N(0,1) and the c_j are constants, then
\sum A_j*c_j / \sqrt(\sum c_j^2) is N(0,1). If now the c_j, or
some of them, are replaced by random variables independent of
the A_j, the result still holds, by the fact that probability is
the expectation of conditional probability.
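[Moderator's note: the fact above is easy to check numerically. Here is a
minimal Monte Carlo sketch, not from the original post; the sample size,
seed, and choice of three normal coefficients c_j are arbitrary.]

```python
import math
import random

random.seed(0)

def normalized_sum(n_terms=3):
    """Draw sum(A_j * c_j) / sqrt(sum(c_j^2)) with random coefficients.

    The A_j are independent N(0,1); the c_j are themselves random
    (here also standard normal) but independent of the A_j.
    """
    a = [random.gauss(0.0, 1.0) for _ in range(n_terms)]
    c = [random.gauss(0.0, 1.0) for _ in range(n_terms)]
    return sum(ai * ci for ai, ci in zip(a, c)) / math.sqrt(sum(ci * ci for ci in c))

# With many draws, the sample mean and variance should be near 0 and 1,
# as they must be if the normalized sum is exactly N(0,1).
samples = [normalized_sum() for _ in range(200_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean, var)
```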
--
This address is for information only. I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN 47907-1399
hrubin@stat.purdue.edu Phone: (765)494-6054 FAX: (765)494-0558
==============================================================================
From: "Noel Vaillant"
Subject: Re: How to find distribution for...
Date: Sat, 4 Sep 1999 22:50:02 +0100
Newsgroups: sci.math
> There is a very simple way to do this, and it should also drop
> out with integration.
>
> If the A_j are independent N(0,1) and the c_j are constants, then
> \sum A_j*c_j / \sqrt(\sum c_j^2) is N(0,1). If now the c_j, or
> some of them, are replaced by random variables independent of
> the A_j, the result still holds, by the fact that probability is
> the expectation of conditional probability.
This is excellent! (I was sweating heavily.)
Specifically, the independence between Z and (X,Y) is what allows us
to write, for any bounded Borel function f, the conditional
expectation as E[f(X,Y,Z)|Z] = g(Z), where g is defined by
g(z) = E[f(X,Y,z)].
Given a Borel set A in R, take f(x,y,z) = 1_A((x+yz)/sqrt(1+z^2)),
where 1_A is the indicator function of A. Then:
P(U in A) = E[f(X,Y,Z)] = E[ E[f(X,Y,Z)|Z] ] = E[g(Z)]
          = \int g(z) dm(z)   (m being the distribution of Z, a
measure on R).
But g(z) = P(U_z in A), where U_z = (X + zY)/sqrt(1 + z^2).
As pointed out by Mr Rubin, U_z is N(0,1) (using the independence
between X and Y). So g(z) = N(0,1)(A), a constant.
Finally P(U in A) = N(0,1)(A). Hence the distribution of U is N(0,1).
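[Moderator's note: the conclusion can be checked directly by simulating U
itself. A minimal sketch, not from the original post; the sample size, seed,
and evaluation points are arbitrary choices.]

```python
import math
import random

random.seed(1)

def sample_u():
    """One draw of U = (X + Y*Z) / sqrt(1 + Z^2), with X, Y, Z iid N(0,1)."""
    x = random.gauss(0.0, 1.0)
    y = random.gauss(0.0, 1.0)
    z = random.gauss(0.0, 1.0)
    return (x + y * z) / math.sqrt(1.0 + z * z)

def phi(t):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

n = 200_000
u = [sample_u() for _ in range(n)]

# The empirical CDF of U should match the N(0,1) CDF at every point.
for t in (-1.0, 0.0, 1.0):
    emp = sum(ui <= t for ui in u) / n
    print(t, emp, phi(t))
```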
Note that I am just stating more explicitly what was already said
in the previous post. Nothing original. I hope this may be useful
to some.
Regards. Noel.
-------------------------------------------
Dr Noel Vaillant
http://www.probability.net
vaillant@probability.net