STA 4032 Midterm 2 Formula Sheet

Chapter 4. Continuous Distributions

Let X be a continuous random variable with probability density function f(x). The cumulative distribution function of X is defined by

F(x) = P(X ≤ x) = ∫_{−∞}^{x} f(y) dy.

We have the following formulas:
(1) ∫_{−∞}^{∞} f(x) dx = 1.
(2) P(a < X ≤ b) = ∫_{a}^{b} f(x) dx = F(b) − F(a).
(3) µ = E(X) = ∫_{−∞}^{∞} x f(x) dx.
(4) σ² = V(X) = E[(X − E(X))²] = ∫_{−∞}^{∞} (x − µ)² f(x) dx.
(5) σ² = E(X²) − µ².
(6) For any function h, E[h(X)] = ∫_{−∞}^{∞} h(x) f(x) dx.

Some Special Continuous Distributions:

(1) Uniform distribution: p.d.f. f(x) = 1/(b − a) for a ≤ x ≤ b; otherwise f(x) = 0.

(2) Normal distribution: let X have a normal distribution N(µ, σ²). Then E(X) = µ and V(X) = σ². Let Z = (X − µ)/σ. Then Z has a standard normal distribution N(0, 1) with µ = 0 and σ = 1. The distribution function of X can be expressed as

F_X(a) = P(X ≤ a) = P((X − µ)/σ ≤ (a − µ)/σ) = P(Z ≤ (a − µ)/σ) = Φ((a − µ)/σ),

where Φ is the distribution function of Z. We also have

P(a ≤ X ≤ b) = F(b) − F(a) = Φ((b − µ)/σ) − Φ((a − µ)/σ).

Normal Approximation to the Binomial Distribution

If X is a binomial random variable with parameters n and p, then Z = (X − np)/√(np(1 − p)) is approximately a standard normal random variable. Continuity correction:

P(X ≤ x) = P(X ≤ x + 0.5) ≈ P(Z ≤ (x + 0.5 − np)/√(np(1 − p))),
P(X ≥ x) = P(X ≥ x − 0.5) ≈ P(Z ≥ (x − 0.5 − np)/√(np(1 − p))).

(3) Exponential distribution with p.d.f. f(x) = λe^(−λx), 0 ≤ x < ∞ (λ > 0). The cumulative distribution function is

F(x) = 1 − e^(−λx) for x ≥ 0, and F(x) = 0 otherwise.

The expected value and variance of X are E(X) = 1/λ and V(X) = 1/λ². It satisfies the memoryless property: for positive s and t,

P(X > s + t | X > t) = P(X > s).

(4) Weibull distribution: its p.d.f. is

f(x) = (β/δ)(x/δ)^(β−1) exp(−(x/δ)^β), 0 ≤ x < ∞,

with scale parameter δ > 0 and shape parameter β > 0. Its cumulative distribution function is

F(x) = P(X ≤ x) = 1 − e^(−(x/δ)^β).
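As a quick numerical check of the continuity-corrected normal approximation, the sketch below compares the exact binomial CDF against Φ evaluated at the corrected point. The parameter values (n = 50, p = 0.4, x = 22) are arbitrary illustrative choices, not part of the formula sheet.

```python
import math

def phi(z):
    """Standard normal CDF: Φ(z) = (1 + erf(z/√2)) / 2."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def binom_cdf(x, n, p):
    """Exact P(X <= x) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x + 1))

def normal_approx_cdf(x, n, p):
    """P(X <= x) ≈ Φ((x + 0.5 − np) / √(np(1 − p))), with continuity correction."""
    mu = n * p
    sigma = math.sqrt(n * p * (1 - p))
    return phi((x + 0.5 - mu) / sigma)

n, p, x = 50, 0.4, 22      # illustrative values only
exact = binom_cdf(x, n, p)
approx = normal_approx_cdf(x, n, p)
print(exact, approx)
```

For these values the two probabilities agree to within about 0.002, which is typical when np and n(1 − p) are both well above 5.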
Its mean and variance are

µ = E(X) = δΓ(1 + 1/β),
σ² = V(X) = δ²Γ(1 + 2/β) − δ²[Γ(1 + 1/β)]².

Chapter 5. Joint Probability Distributions (Sections 5-1 and 5-4)

5-1. Two or More Random Variables

The random variables X and Y are said to be jointly continuous if there is a nonnegative function f_XY(x, y), called the joint probability density function, such that
(1) f_XY(x, y) ≥ 0 for all x, y.
(2) ∫_{−∞}^{∞} ∫_{−∞}^{∞} f_XY(x, y) dx dy = 1.
(3) For any region R in the xy-plane, P((X, Y) ∈ R) = ∫∫_R f_XY(x, y) dx dy.

The marginal density functions of X and Y are given by

f_X(x) = ∫_{−∞}^{∞} f_XY(x, y) dy,    f_Y(y) = ∫_{−∞}^{∞} f_XY(x, y) dx.

Conditional Distributions

If X and Y are jointly continuous with joint density function f_XY(x, y), then the conditional probability density function of Y given that X = x is

f_{Y|x}(y) = f_XY(x, y)/f_X(x),

and the conditional probability density function of X given that Y = y is

f_{X|y}(x) = f_XY(x, y)/f_Y(y).

Some related formulas:
(1) P(a < X < b | Y = y) = ∫_{a}^{b} f_{X|y}(x) dx.
(2) P(a < Y < b | X = x) = ∫_{a}^{b} f_{Y|x}(y) dy.
(3) µ_{Y|x} = E(Y|x) = ∫_{−∞}^{∞} y f_{Y|x}(y) dy.
(4) V(Y|x) = ∫_{−∞}^{∞} [y − E(Y|x)]² f_{Y|x}(y) dy = ∫_{−∞}^{∞} y² f_{Y|x}(y) dy − µ²_{Y|x}.

Independent Random Variables

Two continuous random variables X and Y are independent if f_XY(x, y) = f_X(x)f_Y(y).

5-4. Linear Functions of Random Variables

Given random variables X1, X2, …, Xp and constants c1, c2, …, cp,

Y = c1X1 + c2X2 + ··· + cpXp

is a linear combination of X1, X2, …, Xp. Some related formulas:
(1) E(Y) = c1E(X1) + c2E(X2) + ··· + cpE(Xp).
(2) If X1, X2, …, Xp are independent, then V(Y) = c1²V(X1) + c2²V(X2) + ··· + cp²V(Xp).
(3) For the sample mean X̄ = (X1 + X2 + ··· + Xp)/p with E(Xi) = µ for i = 1, 2, …, p, we have E(X̄) = µ. If X1, X2, …, Xp are also independent with V(Xi) = σ² for i = 1, 2, …, p, we have V(X̄) = σ²/p.
(4) If X1, X2, …, Xp are independent normal random variables with E(Xi) = µi and V(Xi) = σi² for i = 1, 2, …, p, then Y = c1X1 + c2X2 + ··· + cpXp is a normal random variable with

E(Y) = c1µ1 + c2µ2 + ··· + cpµp  and  V(Y) = c1²σ1² + c2²σ2² + ··· + cp²σp².
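The mean and variance formulas for a linear combination of independent random variables can be verified by simulation. The sketch below uses hypothetical constants and normal parameters (c, mu, sigma are illustrative choices, not from the sheet) and compares the closed-form E(Y) and V(Y) with Monte Carlo estimates.

```python
import random

random.seed(42)

# Illustrative parameters (assumptions for this example only).
c = [2.0, -1.0, 0.5]       # constants c1, c2, c3
mu = [1.0, 3.0, -2.0]      # means of X1, X2, X3
sigma = [1.0, 0.5, 2.0]    # standard deviations of X1, X2, X3

# Closed-form values: E(Y) = Σ ci·µi and, by independence, V(Y) = Σ ci²·σi².
ey_formula = sum(ci * mi for ci, mi in zip(c, mu))
vy_formula = sum(ci**2 * si**2 for ci, si in zip(c, sigma))

# Monte Carlo check: draw Y = c1·X1 + c2·X2 + c3·X3 many times.
n = 200_000
ys = [sum(ci * random.gauss(mi, si) for ci, mi, si in zip(c, mu, sigma))
      for _ in range(n)]
ey_sim = sum(ys) / n
vy_sim = sum((y - ey_sim)**2 for y in ys) / (n - 1)

print(ey_formula, ey_sim)   # E(Y) = 2(1) − 1(3) + 0.5(−2) = −2.0
print(vy_formula, vy_sim)   # V(Y) = 4(1) + 1(0.25) + 0.25(4) = 5.25
```

With 200,000 draws the simulated mean and variance should land within a few standard errors of −2.0 and 5.25, illustrating result (4) above.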