Lecture 5: Expectation
1. Expectations for random variables
1.1 Expectations for simple random variables
1.2 Expectations for bounded random variables
1.3 Expectations for general random variables
1.4 Expectation as a Lebesgue integral
1.5 Riemann and Riemann-Stieltjes integrals
2. Expectation and distribution of random variables
2.1 Expectation for transformed discrete random variables
2.2 Expectation for transformed continuous random variables
2.3 Expectation for a product of independent random variables
2.4 Moments of higher order
1. Expectations for random variables
1.1 Expectations for simple random variables
< Ω, F, P > is a probability space;
X = X(ω) is a real valued random variable.
P = {A_1, ..., A_n} is a partition of the sample space Ω, i.e., a family of sets such that (a) A_1, ..., A_n ∈ F, (b) A_1 ∪ ··· ∪ A_n = Ω, (c) A_i ∩ A_j = ∅, i ≠ j.
Definition 5.1. A random variable X is called a simple random variable if there exists a partition P = {A_1, ..., A_n} of the sample space Ω and real numbers x_1, ..., x_n such that

X(ω) = ∑_{i=1}^{n} x_i I_{A_i}(ω) =
  x_1 if ω ∈ A_1,
  ...
  x_n if ω ∈ A_n.
Definition 5.2. If X is a simple random variable, then its expectation (expected value) is defined as

EX = ∑_{i=1}^{n} x_i P(A_i).

Notations that are often used: EX = E[X] = E(X).
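As a quick illustration (a hypothetical sketch, not part of the original notes), the defining sum of Definition 5.2 can be evaluated directly once the values x_i and the probabilities P(A_i) of the partition sets are listed:

```python
# Hypothetical sketch: Definition 5.2 as code,
# EX = x_1*P(A_1) + ... + x_n*P(A_n) for a simple random variable.

def expectation_simple(values, probs):
    """EX for a simple random variable with values x_i on a partition
    whose sets have probabilities P(A_i)."""
    assert abs(sum(probs) - 1.0) < 1e-12      # the A_i form a partition of Omega
    return sum(x * p for x, p in zip(values, probs))

# An indicator I_A takes the value 1 on A and 0 on the complement,
# so E I_A = P(A), in agreement with Example (2) below.
print(expectation_simple([1.0, 0.0], [0.3, 0.7]))   # 0.3 = P(A)
```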
Examples
(1) Let X = X(ω) = M, ω ∈ Ω, where M is a constant. Since <Ω> is a partition, EX = M·P(Ω) = M.
(2) Let I_A = I_A(ω) be the indicator of a random event A, i.e., a random variable that takes values 1 and 0 on the sets A and its complement Ā, respectively. Since <A, Ā> is a partition, EI_A = 1·P(A) + 0·P(Ā) = P(A).
The expectation EX of a simple random variable always exists (takes a finite value) and possesses the following properties:
(1) If X = ∑_{i=1}^{n} x_i I_{A_i} and Y = ∑_{j=1}^{m} y_j I_{B_j} are two simple random variables and a and b are any real numbers, then Z = aX + bY is also a simple random variable and

EZ = aEX + bEY.
——————————
(a) If {A_1, ..., A_n} and {B_1, ..., B_m} are two partitions of Ω, then {A_i ∩ B_j, i = 1, ..., n, j = 1, ..., m} is also a partition of Ω;
(b) Z = aX + bY = ∑_{i=1}^{n} ∑_{j=1}^{m} (a·x_i + b·y_j) I_{A_i ∩ B_j};
(c) EZ = ∑_{i=1}^{n} ∑_{j=1}^{m} (a·x_i + b·y_j) P(A_i ∩ B_j)
= ∑_{i=1}^{n} ∑_{j=1}^{m} a·x_i P(A_i ∩ B_j) + ∑_{i=1}^{n} ∑_{j=1}^{m} b·y_j P(A_i ∩ B_j)
= a ∑_{i=1}^{n} x_i ∑_{j=1}^{m} P(A_i ∩ B_j) + b ∑_{j=1}^{m} y_j ∑_{i=1}^{n} P(A_i ∩ B_j)
= a ∑_{i=1}^{n} x_i P(A_i) + b ∑_{j=1}^{m} y_j P(B_j) = aEX + bEY.
——————————
(2) If X = ∑_{i=1}^{n} x_i I_{A_i} is a simple random variable such that P(X ≥ 0) = 1, then EX ≥ 0.
——————————
P(X ≥ 0) = 1 implies that P(A_i) = 0 if x_i < 0. In this case EX = ∑_{i: x_i ≥ 0} x_i P(A_i) ≥ 0.
——————————
(2') If X and Y are two simple random variables such that P(X ≤ Y) = 1, then EX ≤ EY.
——————————
P(X ≤ Y) = 1 ⇔ P(Y − X ≥ 0) = 1 ⇒ E(Y − X) = EY − EX ≥ 0.
——————————-
1.2 Expectations for bounded random variables
< Ω, F, P > is a probability space;
X = X(ω) is a real valued random variable.
Definition. A random variable X is bounded if there exists a constant M such that |X(ω)| ≤ M for every ω ∈ Ω.
Examples
(1) If Ω = {ω_1, ..., ω_N} is a finite sample space, then any random variable X = X(ω) is bounded.
(2) If Ω = {ω = (ω_1, ω_2, ...), ω_i = 0, 1, i = 1, 2, ...} is the sample space for an infinite sequence of Bernoulli trials, then the random variable X = X(ω) = min(n ≥ 1 : ω_n = 1) (the number of the first "successful" trial) is an unbounded random variable, while the random variable Y = Y(ω) = ω_1 + ··· + ω_n (the number of successes in the first n trials) is a bounded random variable.
Definition. If X is a bounded random variable, then its
expectation is defined as
EX = sup_{X' ≤ X} EX' = inf_{X'' ≥ X} EX'',

where the supremum is taken over simple random variables X' ≤ X, while the infimum is taken over simple random variables X'' ≥ X.
To be sure that the definition is meaningful, we should prove that the sup and the inf in the above definition are equal.
——————————
(a) The inequality sup_{X' ≤ X} EX' ≤ inf_{X'' ≥ X} EX'' holds because any two simple random variables X' ≤ X and X'' ≥ X are connected by the relation X' ≤ X'', and therefore EX' ≤ EX''.
(b) Let |X(ω)| < M. Fix a number n and define

A_i = {ω ∈ Ω : (i−1)M/n < X(ω) ≤ iM/n}, −n ≤ i ≤ n.

Note that A_i ∈ F, i = −n, ..., n. Define the simple random variables

X'_n = ∑_{i=−n}^{n} ((i−1)M/n) I_{A_i},  X''_n = ∑_{i=−n}^{n} (iM/n) I_{A_i}.

By the definition, X'_n ≤ X ≤ X''_n. Moreover, X''_n − X'_n = (M/n) ∑_{i=−n}^{n} I_{A_i} = M/n and, therefore, EX''_n − EX'_n = M/n. Thus,

inf_{X'' ≥ X} EX'' ≤ EX''_n = EX'_n + M/n ≤ sup_{X' ≤ X} EX' + M/n.

Since n is arbitrary, the relation above implies that

inf_{X'' ≥ X} EX'' ≤ sup_{X' ≤ X} EX'.

(c) By (a) and (b) we get sup_{X' ≤ X} EX' = inf_{X'' ≥ X} EX''.
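The sandwich construction in step (b) can be tried numerically. The sketch below (a hypothetical illustration, not from the lecture) uses a finite sample space with equally likely outcomes; the lower and upper simple approximations bracket EX and their gap is exactly M/n:

```python
import math

# Hypothetical illustration of the sandwich construction: for |X| <= M,
# the simple random variables X'_n and X''_n taking values (i-1)M/n and
# iM/n on A_i satisfy X'_n <= X <= X''_n and EX''_n - EX'_n = M/n.

def sandwich_expectations(xs, M, n):
    """Expectations of the lower and upper simple approximations for a
    finite sample space whose outcomes (with values xs) are equally likely."""
    N = len(xs)
    lo = hi = 0.0
    for x in xs:
        i = math.ceil(x * n / M)     # index with (i-1)M/n < x <= iM/n (up to rounding)
        lo += (i - 1) * M / n / N
        hi += i * M / n / N
    return lo, hi

xs = [0.13, 0.25, 0.62, 0.87]        # values of X on four equally likely outcomes
lo, hi = sandwich_expectations(xs, M=1.0, n=100)
true_EX = sum(xs) / len(xs)
assert lo <= true_EX <= hi           # the sandwich holds
print(round(hi - lo, 6))             # gap equals M/n = 0.01
```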
——————————
The expectation EX of a bounded random variable always exists (takes a finite value) and possesses properties similar to those of the expectation of a simple random variable:
(1) If X and Y are two bounded random variables and a and
b are any real numbers, then Z = aX + bY is also a bounded
random variable and
EZ = aEX + bEY.
——————————
(a) Let us first prove that EaX = aEX. The case a = 0 is trivial. The case a < 0 is reduced to the case a > 0 by considering the random variable −X. If a > 0, then EaX = sup_{aX' ≤ aX} EaX' = sup_{X' ≤ X} aEX' = a sup_{X' ≤ X} EX' = aEX.
(b) The proof of (1) can be reduced to the case a = b = 1 by considering the random variables aX and bY. We have

sup_{Z' ≤ Z = X+Y} EZ' ≥ sup_{X' ≤ X, Y' ≤ Y} E(X' + Y'),

since X' ≤ X and Y' ≤ Y implies Z' = X' + Y' ≤ Z = X + Y, and thus the supremum on the right-hand side of the above inequality is actually taken over a smaller set.
(c) Using (b) we get EZ = E(X + Y) ≥ EX + EY. Indeed,

EZ = E(X + Y) = sup_{Z' ≤ Z = X+Y} EZ' ≥ sup_{X' ≤ X, Y' ≤ Y} E(X' + Y')
= sup_{X' ≤ X, Y' ≤ Y} (EX' + EY') = sup_{X' ≤ X} EX' + sup_{Y' ≤ Y} EY' = EX + EY.

(d) The reverse inequality follows by considering the random variables −X and −Y.
——————————-
(2) If X is a bounded random variable such that P (X ≥ 0) = 1
then
EX ≥ 0.
——————————
Denote A = {ω : X(ω) ≥ 0} and let M be a constant that bounds X. Then X(ω) ≥ X_0(ω), ω ∈ Ω, where X_0 = 0·I_A(ω) + (−M)·I_{Ā}(ω) = −M·I_{Ā}(ω) is a simple random variable, and P(Ā) = 0. Then

EX = sup_{X' ≤ X} EX' ≥ EX_0 = −M·P(Ā) = 0.

——————————
(2') If X and Y are two bounded random variables such that P(X ≤ Y) = 1, then EX ≤ EY.
1.3 Expectations for general random variables
< Ω, F, P > is a probability space;
X = X(ω) is a real valued random variable.
Definition. If X = X(ω) ≥ 0, ω ∈ Ω, i.e., X is a non-negative random variable, then

EX = sup_{0 ≤ X' ≤ X} EX',

where the supremum is taken over all bounded random variables X' such that 0 ≤ X' ≤ X.
The expectation EX of a non-negative random variable can take a finite non-negative value or be equal to infinity.
Any random variable X can be decomposed into the difference of two non-negative random variables X^+ = max(X, 0) and X^− = max(−X, 0), that is,

X = X^+ − X^−.

Definition. If X is integrable, i.e., E|X| < ∞, then its expectation is defined as

EX = EX^+ − EX^−.

The definition is correct since 0 ≤ X^+, X^− ≤ |X| and, since |X| is integrable, 0 ≤ EX^+, EX^− < ∞.
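The decomposition and the resulting definition can be checked on a toy example (a hypothetical sketch, not part of the original notes), using a finite sample space with equally likely outcomes:

```python
# Hypothetical sketch of the decomposition X = X^+ - X^- and of the
# definition EX = EX^+ - EX^- on a finite sample space with equally
# likely outcomes.

xs = [-2.0, -0.5, 1.0, 3.5]              # values of X on four equally likely outcomes
plus = [max(x, 0.0) for x in xs]         # X^+ = max(X, 0)
minus = [max(-x, 0.0) for x in xs]       # X^- = max(-X, 0)

def E(values):
    """Expectation with respect to the uniform probabilities 1/N."""
    return sum(values) / len(values)

assert all(x == p - m for x, p, m in zip(xs, plus, minus))   # X = X^+ - X^-
print(E(xs), E(plus) - E(minus))         # both equal 0.5
```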
The expectation EX of a random variable possesses properties similar to those of the expectations of simple and bounded random variables:
(1) If X and Y are two integrable random variables and a and
b are any real numbers, then Z = aX + bY is also an integrable
random variable and
EZ = aEX + bEY.
——————————
(a) Let us first prove that EaX = aEX for the case a ≥ 0 and X ≥ 0, where one should count the product aEX as 0 if a = 0, EX = ∞, and as ∞ if a > 0, EX = ∞. The case a = 0 is trivial since in this case aX ≡ 0 and therefore EaX = 0. If a > 0, then EaX = sup_{aX' ≤ aX} EaX' = sup_{X' ≤ X} aEX' = a sup_{X' ≤ X} EX' = aEX.
(b) Let us now prove that EaX = aEX for an integrable random variable X. In this case, the case a ≤ 0 can be reduced to the case a ≥ 0 by considering the random variable −X. If a > 0, then EaX = E(aX)^+ − E(aX)^− = aEX^+ − aEX^− = aEX.
(c) The proof of (1) for X, Y ≥ 0 can be reduced to the case a = b = 1 by considering the random variables aX and bY. We have

sup_{Z' ≤ Z = X+Y} EZ' ≥ sup_{X' ≤ X, Y' ≤ Y} E(X' + Y'),

since X' ≤ X and Y' ≤ Y implies Z' = X' + Y' ≤ Z = X + Y, and thus the supremum on the right-hand side of the above inequality is actually taken over a smaller set.
(d) Using (c) we get EZ = E(X + Y) ≥ EX + EY for X, Y ≥ 0. Indeed,

EZ = E(X + Y) = sup_{Z' ≤ Z = X+Y} EZ' ≥ sup_{X' ≤ X, Y' ≤ Y} E(X' + Y')
= sup_{X' ≤ X, Y' ≤ Y} (EX' + EY') = sup_{X' ≤ X} EX' + sup_{Y' ≤ Y} EY' = EX + EY.
(e) To prove EZ = E(X + Y) ≤ EX + EY for X, Y ≥ 0, let us use the inequality for non-negative bounded random variables min(X + Y, n) ≤ min(X, n) + min(Y, n). This implies

E min(X + Y, n) ≤ E min(X, n) + E min(Y, n),

and, consequently,

EZ = E(X + Y) = sup_{Z' ≤ Z = X+Y} EZ' = sup_{n ≥ 1} sup_{Z' ≤ min(X+Y, n)} EZ'
= sup_{n ≥ 1} E min(X + Y, n) ≤ sup_{n ≥ 1} (E min(X, n) + E min(Y, n))
≤ sup_{n ≥ 1} E min(X, n) + sup_{n ≥ 1} E min(Y, n) = EX + EY.
(f) Finally, to prove EZ = E(X + Y) = EX + EY for arbitrary integrable random variables X and Y, let us use the representation Z = X + Y = (X^+ + Y^+) − (X^− + Y^−) as a difference of non-negative integrable random variables. For any such representation Z = U − V, one has EZ = EU − EV (indeed, U + Z^− = V + Z^+, so EU + EZ^− = EV + EZ^+ by (c)-(e)). Therefore

E(X + Y) = E(X^+ + Y^+) − E(X^− + Y^−) = (EX^+ + EY^+) − (EX^− + EY^−) = EX + EY.
——————————
(2) If X is a random variable such that P(X ≥ 0) = 1, then EX ≥ 0.
——————————
Since X_0 ≡ 0 is a non-negative bounded random variable,

EX = sup_{X' ≤ X} EX' ≥ EX_0 = 0.

——————————
(2') If X and Y are two random variables such that P(X ≤ Y) = 1, then EX ≤ EY.
1.4 Expectation as a Lebesgue integral
< Ω, F, P > is a probability space;
X = X(ω) is a real valued random variable defined on the probability space < Ω, F, P >.
In fact, EX, as it was defined above, is the Lebesgue integral of the real-valued function X = X(ω) with respect to the measure P(A) and, therefore, according to the notation used in integration theory,

EX = ∫_Ω X(ω) P(dω).
Also the following notations are used:

EX = ∫_Ω X(ω) P(dω) = ∫_Ω X dP = ∫ X dP.
Definition 5.3. A finite measure Q(A) defined on the σ-algebra F is a function that can be represented as Q(A) = qP(A), where P(A) is a probability measure defined on F and q > 0 is a positive constant.
Definition 5.4. The Lebesgue integral ∫_Ω X dQ is defined by the following formula:

∫_Ω X dQ = q ∫_Ω X dP.
Examples
(1) The Lebesgue measure m(A) on the Borel σ-algebra of an interval [c, d] is uniquely determined by its values on intervals, m((a, b]) = b − a, c ≤ a ≤ b ≤ d. It can be represented in the form m(A) = qP(A), where q = d − c and P(A) is a probability measure on the Borel σ-algebra of the interval [c, d], uniquely determined by its values on intervals, P((a, b]) = (b − a)/(d − c), c ≤ a ≤ b ≤ d.
(2) According to the above definition, ∫_{[c,d]} X dm = ∫_c^d X dm = q ∫_{[c,d]} X dP = qEX, where X should be considered as a random variable defined on the probability space <Ω = [c, d], F = B([c, d]), P(A)>.
Definition. A σ-finite measure Q(A) defined on the σ-algebra F is a function of sets for which there exists a sequence Ω_n ∈ F, Ω_n ⊆ Ω_{n+1}, n = 1, 2, ..., with ∪_n Ω_n = Ω, such that Q(Ω_n) < ∞, n = 1, 2, ..., and Q(A) = lim_{n→∞} Q(A ∩ Ω_n).
Definition. The Lebesgue integral ∫_Ω X dQ is defined for a random variable X = X(ω) and a σ-finite measure Q, under the condition that ∫_{Ω_n} |X| dQ < ∞, n = 1, 2, ..., and lim_{n→∞} ∫_{Ω_n} |X| dQ < ∞, by the following formula:

∫_Ω X dQ = lim_{n→∞} ∫_{Ω_n} X dQ.
Examples
(1) The Lebesgue measure m(A) on the Borel σ-algebra of the real line R^1 is uniquely determined by its values on intervals, m((a, b]) = b − a, −∞ ≤ a ≤ b ≤ ∞. It can be represented in the form m(A) = lim_{n→∞} m(A ∩ [−n, n]), where m(A ∩ [−n, n]) is the Lebesgue measure on the interval [−n, n] for every n.
(2) According to the above definition, ∫_{R^1} X dm = ∫_{−∞}^{∞} X dm = lim_{n→∞} ∫_{−n}^{n} X dm, under the condition that ∫_{−n}^{n} |X| dm < ∞, n = 1, 2, ..., and lim_{n→∞} ∫_{−n}^{n} |X| dm < ∞.
1.5 Riemann and Riemann-Stieltjes integrals
f(x) is a real-valued function defined on the real line;
[a, b] is a finite interval;
a = x_{n,0} < x_{n,1} < ··· < x_{n,n} = b;
d(n) = max_{1≤k≤n}(x_{n,k} − x_{n,k−1}) → 0 as n → ∞;
x*_{n,k} ∈ [x_{n,k−1}, x_{n,k}], k = 1, ..., n, n = 1, 2, ...;

S_n = ∑_{k=1}^{n} f(x*_{n,k})(x_{n,k} − x_{n,k−1}).
Definition 5.5. The Riemann integral ∫_a^b f(x) dx exists if and only if lim_{n→∞} S_n exists and is the same for any choice of partitions with d(n) → 0 as n → ∞ and of points x*_{n,k}. In this case

∫_a^b f(x) dx = lim_{n→∞} S_n.

Definition 5.6. If the function f is bounded and Riemann integrable on any finite interval, and lim_{n→∞} ∫_{−n}^{n} |f(x)| dx < ∞, then the function f is Riemann integrable on the real line and

∫_{−∞}^{∞} f(x) dx = lim_{n→∞} ∫_{−n}^{n} f(x) dx.
Theorem 5.1*. A real-valued bounded Borel function f(x) defined on the real line is Riemann integrable on [a, b] if and only if its set of discontinuity points R_f[a, b] has Lebesgue measure m(R_f[a, b]) = 0.
Theorem 5.2*. Let Ω = R^1, F = B^1, and let f = f(x) be a Riemann integrable function, i.e., ∫_{−∞}^{∞} |f(x)| dx < ∞. Then the Lebesgue integral ∫_{−∞}^{∞} |f(x)| m(dx) < ∞ and

∫_{−∞}^{∞} f(x) dx = ∫_{−∞}^{∞} f(x) m(dx).
Example
Let D be the set of all irrational points in the interval [a, b]. The function I_D(x), a ≤ x ≤ b, is a bounded Borel function which is discontinuous at all points of the interval [a, b]. It is not Riemann integrable. But it is Lebesgue integrable, since it is a simple function and ∫_{[a,b]} I_D(x) m(dx) = 0·m([a, b] \ D) + 1·m(D) = 0·0 + 1·(b − a) = b − a.
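The failure of Riemann integrability can be made concrete. The sketch below (a hypothetical model, not from the lecture) represents each tag point as q + c·√2 with rational q and c; such a point is irrational exactly when c ≠ 0, so I_D can be evaluated exactly. Riemann sums with rational tags give 0, sums with irrational tags give b − a, so no common limit exists:

```python
# Hypothetical model: a tag point is q + c*sqrt(2) with rational q, c;
# it is irrational exactly when c != 0, so I_D can be evaluated exactly.

from fractions import Fraction

def indicator_irrational(c):
    return 1 if c != 0 else 0            # I_D at the point q + c*sqrt(2)

def riemann_sum(a, b, n, irrational_tags):
    """Riemann sum of I_D over [a, b] with n equal subintervals."""
    h = Fraction(b - a, n)
    c = Fraction(1, 10**6) if irrational_tags else Fraction(0)
    total = Fraction(0)
    for k in range(n):
        # the tag q + c*sqrt(2) with q = a + k*h lies in the k-th
        # subinterval, since c*sqrt(2) < 2c is far smaller than h
        total += indicator_irrational(c) * h
    return total

print(riemann_sum(0, 1, 1000, irrational_tags=False))  # 0: all tags rational
print(riemann_sum(0, 1, 1000, irrational_tags=True))   # 1: all tags irrational
```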
f(x) is a real-valued function defined on the real line;
G(x) is a real-valued non-decreasing function defined on the real line, continuous from the right;
G(A) is the measure uniquely defined by the function G(x) via the relations G((a, b]) = G(b) − G(a), −∞ < a ≤ b < ∞;
[a, b] is a finite interval;
a = x_{n,0} < x_{n,1} < ··· < x_{n,n} = b;
d(n) = max_{1≤k≤n}(x_{n,k} − x_{n,k−1}) → 0 as n → ∞;
x*_{n,k} ∈ [x_{n,k−1}, x_{n,k}], k = 1, ..., n, n = 1, 2, ...;

S_n = ∑_{k=1}^{n} f(x*_{n,k})(G(x_{n,k}) − G(x_{n,k−1})).
Definition 5.7. The Riemann-Stieltjes integral ∫_a^b f(x) dG(x) exists if and only if lim_{n→∞} S_n exists and is the same for any choice of partitions with d(n) → 0 as n → ∞ and of points x*_{n,k}. In this case

∫_a^b f(x) dG(x) = lim_{n→∞} S_n.

Definition 5.8. If the function f is bounded and Riemann-Stieltjes integrable on any finite interval, and lim_{n→∞} ∫_{−n}^{n} |f(x)| dG(x) < ∞, then the function f is Riemann-Stieltjes integrable on the real line and

∫_{−∞}^{∞} f(x) dG(x) = lim_{n→∞} ∫_{−n}^{n} f(x) dG(x).
Theorem 5.3*. A real-valued bounded Borel function f(x) defined on the real line is Riemann-Stieltjes integrable on [a, b] if and only if its set of discontinuity points R_f[a, b] has measure G(R_f[a, b]) = 0.
Theorem 5.4*. Let Ω = R^1, F = B^1, and let f = f(x) be a Riemann-Stieltjes integrable function, i.e., ∫_{−∞}^{∞} |f(x)| dG(x) < ∞. Then the Lebesgue integral ∫_{−∞}^{∞} |f(x)| G(dx) < ∞ and

∫_{−∞}^{∞} f(x) dG(x) = ∫_{−∞}^{∞} f(x) G(dx).
2. Expectation and distribution of random variables
2.1 Expectation for transformed discrete random variables
< Ω, F, P > is a probability space;
X = X(ω) is a real valued random variable defined on the probability space < Ω, F, P >.
g(x) is a Borel real-valued function defined on a real line.
Y = g(X) is a transformed random variable.
Definition 5.9. A random variable X is a discrete random variable if there exists a finite or countable set of real numbers {x_n} such that ∑_n p_X(x_n) = 1, where p_X(x_n) = P(X = x_n).
Theorem 5.5**. Let X be a discrete random variable. Then

EY = Eg(X) = ∫_Ω g(X(ω)) P(dω) = ∑_n g(x_n) p_X(x_n).
Examples
(1) Let Ω = {ω_1, ..., ω_N} be a discrete sample space, F = F_0 the σ-algebra of all subsets of Ω, and P(A) a probability measure given by the formula P(A) = ∑_{ω_i ∈ A} p(ω_i), where p(ω_i) = P(A_i) ≥ 0, i = 1, ..., N, are the probabilities of the one-point events A_i = {ω_i}, satisfying the relation ∑_{ω_i ∈ Ω} p(ω_i) = 1.
A random variable X = X(ω) and the transformed random variable Y = g(X) are, in this case, simple random variables, since <A_1, ..., A_N> is a partition of Ω, X = ∑_{ω_i ∈ Ω} X(ω_i) I_{A_i}, and Y = ∑_{ω_i ∈ Ω} g(X(ω_i)) I_{A_i}.
In this case,

p_X(x_j) = P(X = x_j) = ∑_{ω_i : X(ω_i) = x_j} p(ω_i)

and, according to the definition of expectation and Theorem 5.5,

EY = Eg(X) = ∑_{ω_i ∈ Ω} g(X(ω_i)) p(ω_i) = ∑_n g(x_n) p_X(x_n).
(2) Let Ω = {ω = (ω_1, ..., ω_n), ω_i = 0, 1, i = 1, ..., n} be the discrete sample space for a series of n Bernoulli trials. In this case p(ω) = ∏_{i=1}^{n} p^{ω_i} q^{1−ω_i}, where p, q > 0, p + q = 1.
Let X(ω) = ω_1 + ··· + ω_n be the number of successes in n trials. In this case, x_j = j, j = 0, ..., n, and

p_X(j) = P(X = j) = ∑_{ω : X(ω) = j} p^j q^{n−j} = C_n^j p^j q^{n−j}, j = 0, ..., n,

where C_n^j = n!/(j!(n−j)!), and, according to the definition of expectation and Theorem 5.5,

EX = ∑_{ω ∈ Ω} X(ω) p(ω) = ∑_{j=0}^{n} j p_X(j) = np.
(3) Let X be a Poisson random variable, i.e., p_X(n) = e^{−λ} λ^n / n!, n = 0, 1, .... Then

EX = ∑_{n=0}^{∞} n · e^{−λ} λ^n / n! = λ.
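The discrete formula EX = ∑_n x_n p_X(x_n) from Theorem 5.5 can be checked numerically for the binomial and Poisson examples above (a hypothetical sketch, not part of the original notes; the parameter values are chosen arbitrarily):

```python
# Hypothetical numerical check of EX = sum_n x_n p_X(x_n)
# for the binomial and Poisson examples above.

from math import comb, exp, factorial

n, p = 10, 0.3                                   # binomial parameters (arbitrary)
binomial_EX = sum(j * comb(n, j) * p**j * (1 - p)**(n - j) for j in range(n + 1))
print(binomial_EX)                               # close to n*p = 3.0

lam = 2.5                                        # Poisson parameter lambda (arbitrary)
poisson_EX = sum(k * exp(-lam) * lam**k / factorial(k) for k in range(100))
print(poisson_EX)                                # close to lambda = 2.5
```

Truncating the Poisson series at 100 terms is harmless here, since the remaining terms are astronomically small for λ = 2.5.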
2.2 Expectation for transformed continuous random variables
< Ω, F, P > is a probability space;
X = X(ω) is a real valued random variable defined on the probability space < Ω, F, P >.
PX (A) = P (X ∈ A) and FX (x) = P (X ≤ x) are, respectively,
the distribution and the distribution function for the random
variable X.
g(x) is a Borel real-valued function defined on a real line.
Y = g(X) is a transformed random variable.
Theorem 5.6**. Let X be a random variable. Then

EY = Eg(X) = ∫_Ω g(X(ω)) P(dω) = ∫_{−∞}^{∞} g(x) P_X(dx).
Definition 5.10. A random variable X is a continuous random variable if there exists a non-negative Borel function f_X(x) defined on the real line with ∫_{−∞}^{∞} f_X(x) m(dx) = 1 such that

F_X(x) = ∫_{−∞}^{x} f_X(y) m(dy), x ∈ R^1.

The function f_X(x) is called the probability density of the random variable X (or of the distribution function F_X(x)).
According to Theorem 5.2, if f_X(x) is a Riemann integrable function, i.e., ∫_{−∞}^{∞} f_X(x) dx = 1, then

F_X(x) = ∫_{−∞}^{x} f_X(y) m(dy) = ∫_{−∞}^{x} f_X(y) dy, x ∈ R^1.
Theorem 5.7**. Let X be a continuous random variable with a probability density f_X. Then

EY = Eg(X) = ∫_Ω g(X(ω)) P(dω) = ∫_{−∞}^{∞} g(x) P_X(dx) = ∫_{−∞}^{∞} g(x) f_X(x) m(dx).

According to Theorem 5.2, if g(x) f_X(x) is a Riemann integrable function, i.e., ∫_{−∞}^{∞} |g(x) f_X(x)| dx < ∞, then

EY = Eg(X) = ∫_Ω g(X(ω)) P(dω) = ∫_{−∞}^{∞} g(x) P_X(dx)
= ∫_{−∞}^{∞} g(x) f_X(x) m(dx) = ∫_{−∞}^{∞} g(x) f_X(x) dx.
Examples
(1) Let Ω = [0, T] × [0, T], F = B(Ω), and let m(A) be the Lebesgue measure on B(Ω), uniquely determined by its values on rectangles, m([a, b] × [c, d]) = (b − a)(d − c) (m(A) is the area of a Borel set A). Let also the corresponding probability measure be P(A) = m(A)/T^2, and let the random variable be X(ω) = |ω_1 − ω_2|, ω = (ω_1, ω_2) ∈ Ω. Find EX.
(1') EX = (1/T^2) ∫_Ω |ω_1 − ω_2| m(dω) = (1/T^2) ∫_{[0,T]×[0,T]} |ω_1 − ω_2| dω_1 dω_2 = ?
(1'') The distribution function is F_X(x) = P(X ≤ x) = (T^2 − (T − x)^2)/T^2 = 1 − (1 − x/T)^2, 0 ≤ x ≤ T. It has a continuous (and, therefore, Riemann integrable) probability density f_X(x) = (2/T)(1 − x/T), 0 ≤ x ≤ T. Thus, EX = ∫_0^T x · (2/T)(1 − x/T) dx = T/3.
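The answer T/3 can be checked by simulation (a hypothetical Monte Carlo sketch, not part of the original notes; T = 6 is an arbitrary choice):

```python
# Hypothetical Monte Carlo check of Example (1''): for omega = (w1, w2)
# uniform on the square [0, T] x [0, T], EX = E|w1 - w2| = T/3.

import random

random.seed(0)
T = 6.0
N = 200_000
est = sum(abs(random.uniform(0, T) - random.uniform(0, T)) for _ in range(N)) / N
print(est, T / 3)        # the estimate should be close to T/3 = 2.0
```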
(2) Let X = X(ω) be a random variable defined on a probability space <Ω, F, P> with the distribution function F(x) = P(X ≤ x) and the distribution F(A) = P(X ∈ A). Then

EX = ∫_Ω X(ω) P(dω) = ∫_{R^1} x F(dx) = ∫_{−∞}^{∞} x dF(x).
(3) Let X be a non-negative random variable. Then the above formula can be transformed to the following form:

EX = ∫_{[0,∞)} x F(dx) = ∫_0^∞ x dF(x) = ∫_0^∞ (1 − F(x)) dx.

——————————
(a) ∫_0^∞ x dF(x) = lim_{A→∞} ∫_0^A x dF(x);
(b) ∫_0^∞ (1 − F(x)) dx = lim_{A→∞} ∫_0^A (1 − F(x)) dx;
(c) ∫_0^A x dF(x) = −A(1 − F(A)) + ∫_0^A (1 − F(x)) dx;
(d) ∫_0^∞ (1 − F(x)) dx < ∞ ⇒ ∫_0^∞ x dF(x) < ∞;
(e) A(1 − F(A)) ≤ ∫_A^∞ x dF(x);
(f) ∫_0^∞ x dF(x) < ∞ ⇒ ∫_0^∞ (1 − F(x)) dx < ∞;
(g) (a)-(f) ⇒ ∫_0^∞ x dF(x) = ∫_0^∞ (1 − F(x)) dx.
——————————-
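The tail formula EX = ∫_0^∞ (1 − F(x)) dx can be verified numerically for a concrete distribution (a hypothetical sketch, not part of the original notes; the exponential distribution with rate λ = 0.5 is an arbitrary choice, for which 1 − F(x) = e^{−λx} and EX = 1/λ):

```python
# Hypothetical numerical check of EX = int_0^inf (1 - F(x)) dx for an
# exponential F(x) = 1 - exp(-lam*x); here 1 - F(x) = exp(-lam*x)
# and the exact value is EX = 1/lam.

from math import exp

lam = 0.5                                        # rate parameter (arbitrary)
h, A = 1e-4, 60.0                                # step size and truncation point
tail_integral = sum(exp(-lam * k * h) * h for k in range(int(A / h)))
print(tail_integral, 1 / lam)                    # both close to 2.0
```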
2.3 Expectation for a product of independent random variables
Theorem 5.8. Let X and Y be two independent random variables with E|X|, E|Y| < ∞. Then E|XY| < ∞ and

EXY = EXEY.
——————————
(a) Let X = ∑_{i=1}^{n} x_i I_{A_i} and Y = ∑_{j=1}^{m} y_j I_{B_j} be two simple independent random variables. Then XY = ∑_{i=1}^{n} ∑_{j=1}^{m} x_i y_j I_{A_i ∩ B_j} is also a simple random variable and, therefore,

EXY = ∑_{i=1}^{n} ∑_{j=1}^{m} x_i y_j P(A_i ∩ B_j) = ∑_{i=1}^{n} ∑_{j=1}^{m} x_i y_j P(A_i) P(B_j)
= ∑_{i=1}^{n} x_i P(A_i) ∑_{j=1}^{m} y_j P(B_j) = EXEY.

(b) The proof for bounded and general random variables is analogous to the proof given for the linearity property of expectations.
——————————-
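Theorem 5.8 can also be checked by simulation (a hypothetical Monte Carlo sketch, not part of the original notes; the choice of a uniform X and an exponential Y is arbitrary):

```python
# Hypothetical Monte Carlo check of Theorem 5.8: for independent
# X (uniform on [0, 1]) and Y (exponential with rate 1),
# the estimate of EXY should agree with the product EX * EY.

import random

random.seed(1)
N = 100_000
xs = [random.uniform(0, 1) for _ in range(N)]
ys = [random.expovariate(1.0) for _ in range(N)]

EX = sum(xs) / N
EY = sum(ys) / N
EXY = sum(x * y for x, y in zip(xs, ys)) / N
print(EXY, EX * EY)      # the two estimates nearly coincide
```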
2.4 Moments of higher order
Let X = X(ω) be a random variable defined on a probability space <Ω, F, P> with the distribution function F_X(x) = P(X ≤ x) and the distribution F_X(A) = P(X ∈ A);
let also Y = X^n, with the distribution function F_Y(y) = P(X^n ≤ y) and the distribution F_Y(A) = P(X^n ∈ A).
Definition 5.11. The moment of order n of the random variable X is the expectation of the random variable Y = X^n:

EX^n = ∫_Ω X(ω)^n P(dω) = ∫_{R^1} x^n F_X(dx) = ∫_{−∞}^{∞} x^n dF_X(x)
= ∫_{R^1} y F_Y(dy) = ∫_{−∞}^{∞} y dF_Y(y).
Problems
1. Let X be a discrete random variable taking non-negative integer values 0, 1, 2, .... Prove that

EX = ∑_{n=1}^{∞} P(X ≥ n).

2. Let X be a non-negative random variable and F(x) = P(X ≤ x). Prove that

EX^n = n ∫_0^∞ x^{n−1}(1 − F(x)) dx.

3. Let X be a geometric random variable taking values n = 1, 2, ... with probabilities P(X = n) = q p^{n−1}, n = 1, 2, .... Find: (a) P(X ≥ n); (b) EX.
4. The random variable X has a Poisson distribution with parameter λ > 0. Find E[1/(1 + X)].
5. Let X_1, ..., X_n be independent random variables uniformly distributed on the interval [0, T], and let Z_n = max(X_1, ..., X_n). Find: (a) P(Z_n ≤ x); (b) EZ_n; (c) E(Z_n − T)^2.
6. Let X_1, ..., X_n be independent random variables uniformly distributed on the interval [0, T], and let Y_n = 2(X_1 + ··· + X_n)/n. Find: (a) EY_n; (b) E(Y_n − T)^2.
7. Let Var X = E(X − EX)^2 < ∞. Prove that (a) Var X = EX^2 − (EX)^2; (b) Var X = inf_{a ∈ R^1} E(X − a)^2.
8. Let X and Y be independent random variables with Var X, Var Y < ∞. Prove that Var(X + Y) = Var X + Var Y.
9. Let X ≥ 0 be a continuous non-negative random variable with EX^2 < ∞. Prove that EX^2 = ∫_0^∞ x^2 f_X(x) dx = 2 ∫_0^∞ x(1 − F_X(x)) dx.
10. Let a random variable X have an exponential distribution, F_X(x) = I(x ≥ 0)(1 − e^{−λx}). Find EX and Var X = E(X − EX)^2.