Notes: Mathematical Expectation
MATHEMATICAL EXPECTATION
Mean of a Random Variable
Definition 4.1 Let X be a random variable with
probability distribution f(x). The mean or expected
value of X is
µ = E(X) = ∑_x x f(x)
if X is discrete, and
µ = E(X) = ∫_{-∞}^{∞} x f(x) dx
if X is continuous.
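The discrete case of Definition 4.1 can be sketched directly. This is a minimal example assuming a hypothetical distribution: X = number of heads in 3 fair coin tosses, so f(x) = C(3, x)(1/2)³.

```python
from math import comb

# Assumed example distribution: X = number of heads in 3 fair coin tosses.
f = {x: comb(3, x) * 0.5**3 for x in range(4)}

# µ = E(X) = ∑_x x f(x)
mean = sum(x * p for x, p in f.items())
print(mean)  # 1.5
```

The dictionary plays the role of the probability distribution f(x): keys are the values X can take, values are their probabilities.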
Theorem 4.1 Let X be a random variable with
probability distribution f(x). The mean or expected
value of the random variable g(X) is
µ_{g(X)} = E[g(X)] = ∑_x g(x) f(x)
if X is discrete, and
µ_{g(X)} = E[g(X)] = ∫_{-∞}^{∞} g(x) f(x) dx
if X is continuous.
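The continuous case of Theorem 4.1 can be approximated numerically. A sketch, assuming a hypothetical example where X ~ Uniform(0, 1) (so f(x) = 1 on [0, 1]) and g(x) = x², whose exact expected value is 1/3:

```python
def expected_value_g(g, f, a, b, n=100_000):
    # Approximate E[g(X)] = ∫ g(x) f(x) dx with a midpoint Riemann sum
    # over [a, b], assuming f vanishes outside that interval.
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) * f(a + (i + 0.5) * h) for i in range(n)) * h

# Assumed example: X ~ Uniform(0, 1), g(x) = x**2, so E[g(X)] = 1/3.
val = expected_value_g(lambda x: x**2, lambda x: 1.0, 0.0, 1.0)
print(round(val, 4))  # 0.3333
```

The function name and the midpoint-rule choice are illustrative, not part of the notes; any quadrature rule would do.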
ENGSTAT Notes of AM Fillone
Definition 4.2 Let X and Y be random variables with
joint probability distribution f(x,y). The mean or
expected value of the random variable g(X,Y) is
µ_{g(X,Y)} = E[g(X,Y)] = ∑_x ∑_y g(x,y) f(x,y)
if X and Y are discrete, and
µ_{g(X,Y)} = E[g(X,Y)] = ∫_{-∞}^{∞} ∫_{-∞}^{∞} g(x,y) f(x,y) dx dy
if X and Y are continuous.
Note that if g(X, Y) = X in Definition 4.2, we have
E(X) = ∑_x ∑_y x f(x,y) = ∑_x x g(x) (discrete case)
E(X) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} x f(x,y) dx dy = ∫_{-∞}^{∞} x g(x) dx (continuous case)
where g(x) is the marginal distribution of X.
Therefore, in calculating E(X) over a two-dimensional
space, one may use either the joint probability
distribution of X and Y or the marginal distribution of
X. Similarly, we define
E(Y) = ∑_x ∑_y y f(x,y) = ∑_y y h(y) (discrete case)
E(Y) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} y f(x,y) dx dy = ∫_{-∞}^{∞} y h(y) dy (continuous case)
where h(y) is the marginal distribution of Y.
Variance and Covariance
Definition 4.3 Let X be a random variable with
probability distribution f(x) and mean µ. The variance
of X is
σ² = E[(X - µ)²] = ∑_x (x - µ)² f(x)
if X is discrete, and
σ² = E[(X - µ)²] = ∫_{-∞}^{∞} (x - µ)² f(x) dx
if X is continuous. The positive square root of the
variance, σ, is called the standard deviation of X.
Theorem 4.2 The variance of random variable X is
given by
σ² = E(X²) - µ².
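Theorem 4.2 is easy to verify numerically: the definitional form E[(X - µ)²] and the shortcut E(X²) - µ² agree. A sketch on an assumed discrete pmf:

```python
# Assumed example pmf for X.
f = {1: 0.2, 2: 0.5, 3: 0.3}

mu = sum(x * p for x, p in f.items())                  # µ = E(X)
var_def = sum((x - mu)**2 * p for x, p in f.items())   # E[(X - µ)²]
ex2 = sum(x**2 * p for x, p in f.items())              # E(X²)
var_thm = ex2 - mu**2                                  # E(X²) - µ²

print(mu, var_def, var_thm)
```

The shortcut form is usually preferred in hand computation because it avoids subtracting µ inside every term.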
Theorem 4.3 Let X be a random variable with
probability distribution f(x). The variance of the
random variable g(X) is
σ²_{g(X)} = E{[g(X) - µ_{g(X)}]²} = ∑_x [g(x) - µ_{g(X)}]² f(x)
if X is discrete, and
σ²_{g(X)} = E{[g(X) - µ_{g(X)}]²} = ∫_{-∞}^{∞} [g(x) - µ_{g(X)}]² f(x) dx
if X is continuous.
Definition 4.4 Let X and Y be random variables with
joint probability distribution f(x,y). The covariance of
X and Y is
σ_XY = E[(X - µ_X)(Y - µ_Y)] = ∑_x ∑_y (x - µ_X)(y - µ_Y) f(x,y)
if X and Y are discrete, and
σ_XY = E[(X - µ_X)(Y - µ_Y)] = ∫_{-∞}^{∞} ∫_{-∞}^{∞} (x - µ_X)(y - µ_Y) f(x,y) dx dy
if X and Y are continuous.
Theorem 4.4 The covariance of two random variables
X and Y with means µX and µY, respectively, is given
by
σ_XY = E(XY) - µ_X µ_Y.
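Theorem 4.4 can be checked the same way as Theorem 4.2: compute the covariance from Definition 4.4 and from the shortcut E(XY) - µ_X µ_Y. A sketch on an assumed joint pmf:

```python
# Assumed toy joint pmf f(x, y).
joint = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}

mu_x = sum(x * p for (x, y), p in joint.items())
mu_y = sum(y * p for (x, y), p in joint.items())

# Definition 4.4: E[(X - µ_X)(Y - µ_Y)]
cov_def = sum((x - mu_x) * (y - mu_y) * p for (x, y), p in joint.items())
# Theorem 4.4: E(XY) - µ_X µ_Y
cov_thm = sum(x * y * p for (x, y), p in joint.items()) - mu_x * mu_y

print(cov_def, cov_thm)
```

A negative result here would indicate that X and Y tend to move in opposite directions; for this assumed pmf the covariance is slightly negative.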
Means and Variances of Linear Combinations of
Random Variables
Theorem 4.5 If a and b are constants, then
E(aX + b) = aE(X) + b.
Theorem 4.6 The expected value of the sum or
difference of two or more functions of a random
variable X is the sum or difference of the expected
values of the functions. That is,
E[g(X) ± h(X)] = E[g(X)] ± E[h(X)].
Theorem 4.7 The expected value of the sum or
difference of two or more functions of the random
variables X and Y is the sum or difference of the
expected values of the functions. That is,
E[g(X,Y) ± h(X,Y)] = E[g(X,Y)] ± E[h(X,Y)].
Theorem 4.8 Let X and Y be two independent random
variables. Then
E(XY) = E(X)E(Y).
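Theorem 4.8 can be illustrated by building a joint pmf as the product of two assumed marginals (which is exactly what independence means) and verifying E(XY) = E(X)E(Y):

```python
# Assumed marginal pmfs for independent X and Y.
fx = {0: 0.4, 1: 0.6}
fy = {1: 0.5, 2: 0.5}

# Independence: f(x, y) = fx(x) * fy(y)
joint = {(x, y): px * py for x, px in fx.items() for y, py in fy.items()}

ex = sum(x * p for x, p in fx.items())
ey = sum(y * p for y, p in fy.items())
exy = sum(x * y * p for (x, y), p in joint.items())

print(exy, ex * ey)
```

Note the direction of the theorem: independence implies E(XY) = E(X)E(Y), but the converse does not hold in general.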
Theorem 4.9 If a and b are constants, then
σ²_{aX+b} = a²σ²_X = a²σ².
Theorem 4.10 If X and Y are random variables with
joint probability distribution f(x,y), then
σ²_{aX+bY} = a²σ²_X + b²σ²_Y + 2ab σ_XY.
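Theorem 4.10 can be verified on a small joint pmf by computing the variance of Z = aX + bY directly and comparing with a²σ²_X + b²σ²_Y + 2abσ_XY. A sketch with assumed values a = 2, b = 3:

```python
# Assumed toy joint pmf and constants.
joint = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}
a, b = 2, 3

def e(g):
    # E[g(X, Y)] under the joint pmf (Definition 4.2, discrete case)
    return sum(g(x, y) * p for (x, y), p in joint.items())

var_x = e(lambda x, y: x**2) - e(lambda x, y: x)**2
var_y = e(lambda x, y: y**2) - e(lambda x, y: y)**2
cov = e(lambda x, y: x * y) - e(lambda x, y: x) * e(lambda x, y: y)

# Direct variance of Z = aX + bY, via σ²_Z = E(Z²) - µ_Z²
var_z = e(lambda x, y: (a * x + b * y)**2) - e(lambda x, y: a * x + b * y)**2
rhs = a**2 * var_x + b**2 * var_y + 2 * a * b * cov
print(var_z, rhs)
```

Setting b = 0 recovers Theorem 4.9 as a special case.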
Chebyshev’s Theorem
Theorem 4.11 (Chebyshev’s Theorem) The
probability that any random variable X will assume a
value within k standard deviations of the mean is at least
1 - 1/k². That is,
P(µ - kσ < X < µ + kσ) ≥ 1 - 1/k².
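Chebyshev's bound holds for any distribution, but it is usually conservative. A sketch comparing the bound with the exact probability for an assumed distribution (X = number of heads in 4 fair tosses, k = 2):

```python
from math import comb, sqrt

# Assumed example: X ~ Binomial(4, 1/2), so µ = 2 and σ = 1.
f = {x: comb(4, x) * 0.5**4 for x in range(5)}
mu = sum(x * p for x, p in f.items())
sigma = sqrt(sum((x - mu)**2 * p for x, p in f.items()))
k = 2

# Exact P(µ - kσ < X < µ + kσ) versus Chebyshev's lower bound 1 - 1/k²
exact = sum(p for x, p in f.items() if mu - k * sigma < x < mu + k * sigma)
bound = 1 - 1 / k**2
print(exact, bound)  # exact probability is at least the bound
```

Here the exact probability (0.875) comfortably exceeds the guaranteed lower bound (0.75), which is typical: Chebyshev trades tightness for complete generality.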