Chapter 6 Continuous Random Variables
§ 6.1 Probability Density Functions
• Definition: Let X be a random variable. Suppose there
exists a nonnegative real-valued function f: ℝ → [0, ∞) such
that for any subset A of the real numbers which can be
constructed from intervals by a countable number of set
operations,
P(X ∈ A) = ∫_A f(x) dx.
Then X is called absolutely continuous, or simply
continuous. The function f is called the probability
density function (pdf) of X.
__________________________________________________________
© Shi-Chung Chang, Tzi-Dar Chiueh
• Properties of the pdf
(a) F(t) = P(-∞ < X ≤ t) = ∫_{-∞}^{t} f(x) dx
(b) Since F(∞) = 1,
∫_{-∞}^{∞} f(x) dx = 1.
(c) If f is continuous, then F'(x) = f(x).
(d) For real a, b with a ≤ b,
P(a ≤ X ≤ b) = ∫_a^b f(x) dx
(e) P(a < X < b) = P(a < X ≤ b) = P(a ≤ X < b)
= P(a ≤ X ≤ b) = ∫_a^b f(x) dx
* Example 6.1 Let the pdf of X be
f(x) = λ x e^{-x}    if x > 0
     = 0             otherwise.
(a) Find λ.
(b) Find the cdf of X.
(c) Find P(2 < X < 5) and P(X > 7).
Solution: Since ∫_{-∞}^{∞} f(x) dx = 1,
(a) λ ∫_0^∞ x e^{-x} dx = 1  =>  λ [-(x+1)e^{-x}]_0^∞ = 1  =>  λ = 1.
(b) F(t) = ∫_0^t f(x) dx = [-(x+1)e^{-x}]_0^t = -(t+1)e^{-t} + 1. Thus
F(t) = 0                   t < 0
     = -(t+1)e^{-t} + 1    t ≥ 0
(c) P(2 < X < 5) = F(5) - F(2) and P(X > 7) = 1 - F(7).
* see Example 6.2
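As a numerical sanity check on Example 6.1 (a minimal sketch: the midpoint-rule integrator and the truncation point 50 are my own choices, not from the text):

```python
import math

# Sanity check for Example 6.1: f(x) = x e^{-x} for x > 0 (lambda = 1),
# with cdf F(t) = 1 - (t + 1) e^{-t} for t >= 0.

def f(x):
    return x * math.exp(-x) if x > 0 else 0.0

def F(t):
    return 1.0 - (t + 1.0) * math.exp(-t) if t >= 0 else 0.0

def integrate(g, a, b, n=100_000):
    # midpoint rule; adequate for these smooth, rapidly decaying integrands
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

total = integrate(f, 0.0, 50.0)    # normalization; the tail beyond 50 is negligible
p_2_5 = F(5) - F(2)                # P(2 < X < 5)
p_gt_7 = 1.0 - F(7)                # P(X > 7) = 8 e^{-7}
```

The same probabilities can also be obtained by integrating f directly, e.g. `integrate(f, 2.0, 5.0)` should agree with `p_2_5`.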
§ 6.2 Density Function of a Function of a
Random Variable
• To find the density function of a new random variable
Y, which is a function of a known random variable X,
there are two methods: (a) find the cdf of Y and then
differentiate it; (b) find fY directly from fX (method of
transformation, or change of variable).
* Example 6.3 Find the pdf of Y, where Y = X² and
fX(x) = 2/x² if 1 < x < 2 (0 otherwise).
Solution: Let FY and fY be the cdf and pdf of Y,
respectively. For 1 ≤ t < 4,
FY(t) = P(Y ≤ t) = P(X² ≤ t) = P(-√t ≤ X ≤ √t)
= P(1 ≤ X ≤ √t) = ∫_1^{√t} 2/x² dx = 2 - 2/√t, and
FY(t) = 0            t < 1
      = 2 - 2/√t     1 ≤ t < 4
      = 1            t ≥ 4
and
fY(t) = FY'(t) = t^{-3/2}    1 ≤ t ≤ 4
      = 0                    otherwise
* see Examples 6.4 and 6.5.
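The cdf derived in Example 6.3 can be checked by simulation (a sketch; the sample size and seed are arbitrary choices). Since X has cdf F_X(x) = 2 - 2/x on (1, 2), inverse-cdf sampling gives X = 2/(2 - U) for U uniform on [0, 1):

```python
import random, math

# Simulation check of Example 6.3: compare the derived cdf of Y = X^2
# against the empirical cdf of simulated values.

def F_Y(t):
    # cdf derived in the example
    if t < 1.0:
        return 0.0
    if t >= 4.0:
        return 1.0
    return 2.0 - 2.0 / math.sqrt(t)

random.seed(0)
N = 200_000
# inverse-cdf sampling: X = 2/(2 - U), then Y = X^2
ys = [(2.0 / (2.0 - random.random())) ** 2 for _ in range(N)]

# empirical cdf of the simulated Y at a few points inside (1, 4)
empirical = {t: sum(y <= t for y in ys) / N for t in (1.5, 2.0, 3.0)}
```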
• Theorem 6.1 (Method of Transformation):
Let X be a continuous random variable with density
function fX and set of possible values A. For the invertible
function h: A → ℝ (strictly increasing or strictly decreasing),
let Y = h(X) be a random variable with the set of
possible values h(A) = B. Suppose the inverse of y = h(x) is
the function x = h^{-1}(y), which is differentiable for all
values y ∈ B. Then
fY(y) = fX(h^{-1}(y)) |(h^{-1})'(y)|,   y ∈ B.
Proof: If h(x) is strictly increasing, then
FY(y) = P{h(X) ≤ y} = P{X ≤ h^{-1}(y)} = FX(h^{-1}(y)).
After differentiation, we have
fY(y) = fX(h^{-1}(y)) (h^{-1})'(y) = fX(h^{-1}(y)) |(h^{-1})'(y)|,
since (h^{-1})'(y) > 0 when h is strictly increasing.
The same is true when h(x) is strictly decreasing.
* Example 6.6 Let X be a random variable with the
density function
fX(x) = 2e^{-2x}    if x > 0
      = 0           otherwise
Find the pdf of Y = X^{1/2}.
Solution: h(x) = x^{1/2}, so x = h^{-1}(y) = y², and
fY(y) = fX(h^{-1}(y)) |(h^{-1})'(y)| = 2e^{-2y²} |2y| = 4y e^{-2y²}    y > 0
      = 0                                                              otherwise
* see Example 6.7
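The transformed density in Example 6.6 can be checked by simulation (a sketch; seed and sample size are arbitrary). Integrating f_Y(y) = 4y e^{-2y²} gives P(Y ≤ t) = 1 - e^{-2t²}, which should match the empirical cdf of Y = √X with X ~ Exponential(2), sampled via the inverse cdf X = -ln(1 - U)/2:

```python
import random, math

# Simulation check of Example 6.6's method-of-transformation result.

def F_Y(t):
    # closed form: int_0^t 4y e^{-2y^2} dy = 1 - e^{-2 t^2}
    return 1.0 - math.exp(-2.0 * t * t) if t > 0 else 0.0

random.seed(1)
N = 200_000
# inverse-cdf sampling for Exponential(2), then take the square root
ys = [math.sqrt(-math.log(1.0 - random.random()) / 2.0) for _ in range(N)]

empirical = {t: sum(y <= t for y in ys) / N for t in (0.5, 1.0, 1.5)}
```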
§ 6.3 Expectations and Variances
• Definition: If X is a continuous random variable
(c.r.v.) with probability density function f, the expected
value of X is defined by
E(X) = ∫_{-∞}^{∞} x f(x) dx.
* X is said to have a finite expected value if the above
integral converges absolutely, i.e.,
∫_{-∞}^{∞} |x| f(x) dx < ∞.
* Example 6.9 (Cauchy density) A random variable X has
density function
f(x) = c/(1+x²),   -∞ < x < ∞.
(a) Find c.
(b) Show that E(X) does not exist.
Solution:
(a) Note that ∫_{-∞}^{∞} c/(1+x²) dx = 1 => c[tan^{-1} x]_{-∞}^{∞} = cπ = 1 => c = 1/π.
(b) ∫_{-∞}^{∞} |x| f(x) dx = (1/π) ∫_{-∞}^{∞} |x|/(1+x²) dx = (2/π) ∫_0^∞ x/(1+x²) dx
= (1/π)[ln(1+x²)]_0^∞ = ∞,
so the integral does not converge absolutely and E(X) does not exist.
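Both parts of Example 6.9 can be illustrated numerically (a sketch; the truncation points are arbitrary choices). With c = 1/π the density integrates to 1, while the truncated integral of |x| f(x) equals (1/π) ln(1 + M²) and so grows without bound as M increases:

```python
import math

# Numerical illustration of the Cauchy density of Example 6.9.

c = 1.0 / math.pi

def total_mass(M, n=200_000):
    # midpoint rule for the integral of c/(1 + x^2) over [-M, M]
    h = 2.0 * M / n
    return h * sum(c / (1.0 + (-M + (i + 0.5) * h) ** 2) for i in range(n))

def truncated_mean_abs(M):
    # closed form: (2/pi) * int_0^M x/(1 + x^2) dx = (1/pi) ln(1 + M^2)
    return math.log(1.0 + M * M) / math.pi

mass = total_mass(1e4)                                        # close to 1
growth = [truncated_mean_abs(10.0 ** k) for k in range(1, 5)] # increases without bound
```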
• Theorem 6.2: For any continuous random variable X
with probability distribution function F and density
function f,
E(X) = ∫_0^∞ (1 - F(t)) dt - ∫_0^∞ F(-t) dt
Proof: see text.
Note that if X is a nonnegative random variable, then
E(X) = ∫_0^∞ P(X > t) dt
• Theorem 6.3: Let X be a continuous nonnegative
random variable with probability density function f(x);
then for any function h: ℝ → ℝ,
E[h(X)] = ∫_{-∞}^{∞} h(x) f(x) dx
Proof: see text.
Corollary: Let X be a continuous random variable with
probability density function f(x). Let h1, h2, …, hn be real-valued
functions, and α1, α2, …, αn be real numbers. Then
E[α1 h1(X) + α2 h2(X) + … + αn hn(X)]
= α1 E[h1(X)] + α2 E[h2(X)] + … + αn E[hn(X)]
Proof: The same as in the discrete case; just replace the
summations by integrals.
* see Example 6.10
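The linearity corollary can be verified numerically (a sketch; X ~ Uniform(0, 1) and the functions h1(x) = x², h2(x) = x³ are illustrative choices of my own). Since f(x) = 1 on (0, 1), E[h(X)] = ∫_0^1 h(x) dx:

```python
# Numerical check that E[a1 h1(X) + a2 h2(X)] = a1 E[h1(X)] + a2 E[h2(X)].

def integrate(g, a, b, n=100_000):
    # midpoint rule
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

a1, a2 = 3.0, -2.0
lhs = integrate(lambda x: a1 * x**2 + a2 * x**3, 0.0, 1.0)   # E[a1 h1(X) + a2 h2(X)]
rhs = a1 * integrate(lambda x: x**2, 0.0, 1.0) \
    + a2 * integrate(lambda x: x**3, 0.0, 1.0)               # a1 E[h1(X)] + a2 E[h2(X)]
```

Exact values are E[X²] = 1/3 and E[X³] = 1/4, so both sides should be near a1/3 + a2/4.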
Variances of Continuous Random Variables
• Definition: If X is a continuous random variable with
E(X) = μ, then σX and Var(X), called the standard
deviation and the variance of X, respectively, are defined
by
σX = {E[(X-μ)²]}^{1/2}
Var(X) = E[(X-μ)²]
As in the discrete case, we have
Var(X) = E[X²] - (E[X])²
Var(aX+b) = a² Var(X) and σ_{aX+b} = |a| σX
* see Example 6.11
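The scaling rules above can be checked by simulation (a sketch; the Uniform(0, 1) sample, the seed, and the constants a, b are arbitrary choices). For Uniform(0, 1), Var(X) = 1/12:

```python
import random, math

# Simulation check of Var(aX + b) = a^2 Var(X) and sigma_{aX+b} = |a| sigma_X.

random.seed(2)
xs = [random.random() for _ in range(200_000)]

def var(vs):
    # sample analogue of Var(X) = E[(X - mu)^2]
    m = sum(vs) / len(vs)
    return sum((v - m) ** 2 for v in vs) / len(vs)

a, b = -3.0, 5.0
v_x = var(xs)                          # close to Var(X) = 1/12
v_axb = var([a * x + b for x in xs])   # should equal a^2 * v_x
sd_x, sd_axb = math.sqrt(v_x), math.sqrt(v_axb)
```

The identity holds exactly for the sample variance, so the agreement is limited only by floating-point rounding, while v_x matches 1/12 up to Monte Carlo error.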