Probability theory 2008
Outline
- Order statistics
- Distribution of order variables (and extremes)
- Joint distribution of order variables (and extremes)
Order statistics
Let X1, …, Xn be a (random) sample and set
X(k) = the kth smallest of X1, …, Xn
Then the ordered sample
(X(1), X(2), …, X(n)) is called the order statistic of (X1, …, Xn)
and
X(k) the kth order variable
Order variables - examples
Example 1: Let X1, …, Xn be U(0,1) random numbers. Find the
probability that
max(X1, …, Xn ) > 1 – 1/n
Example 2: Let X1, …, X100 be a simple random sample from a
(finite) population with median m. Find the probability that
X(40) > m.
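The slides leave these as exercises. Purely as a numerical companion to Example 1, here is a small Monte Carlo sketch (numpy assumed; the seed, n = 100 and the number of replications are arbitrary choices) that can be compared with the exact value 1 − (1 − 1/n)^n, which follows from the formula for the maximum on the next slide:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 100, 50_000

# Example 1: estimate P(max(X_1, ..., X_n) > 1 - 1/n) for U(0,1) samples.
samples = rng.random((reps, n))
mc_estimate = np.mean(samples.max(axis=1) > 1 - 1 / n)

# Exact value (anticipating the next slide): P(max <= x) = x**n, so the answer is 1 - (1 - 1/n)**n.
exact = 1 - (1 - 1 / n) ** n

print(f"Monte Carlo {mc_estimate:.4f}, exact {exact:.4f}, limit 1 - 1/e = {1 - np.exp(-1):.4f}")
```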
Distribution of the extreme order variables

$$F_{X_{(n)}}(x) = P(X_1 \le x, \ldots, X_n \le x) = \prod_{k=1}^{n} P(X_k \le x) = (F(x))^n$$

$$f_{X_{(n)}}(x) = \ldots$$

$$F_{X_{(1)}}(x) = 1 - P(X_1 > x, \ldots, X_n > x) = 1 - \prod_{k=1}^{n} P(X_k > x) = 1 - (1 - F(x))^n$$

$$f_{X_{(1)}}(x) = \ldots$$
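As a quick sanity check of these two formulas, the following sketch (numpy/scipy assumed; an Exp(1) parent distribution, n = 5 and x = 0.7 are arbitrary choices) compares empirical CDFs of the sample maximum and minimum with (F(x))^n and 1 − (1 − F(x))^n:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, reps, x = 5, 100_000, 0.7

# Exp(1) chosen as an arbitrary continuous parent distribution with CDF F.
samples = rng.exponential(scale=1.0, size=(reps, n))
F = stats.expon.cdf

emp_max = np.mean(samples.max(axis=1) <= x)   # empirical F_{X_(n)}(x)
emp_min = np.mean(samples.min(axis=1) <= x)   # empirical F_{X_(1)}(x)

print("maximum:", emp_max, "vs", F(x) ** n)
print("minimum:", emp_min, "vs", 1 - (1 - F(x)) ** n)
```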
The beta distribution

For integer-valued r and s, the beta distribution β(r, s) is the distribution of the rth smallest of r + s − 1 independent random variables uniformly distributed on (0, 1).

For β(r, s) with r, s > 0:

$$f(x) = \frac{\Gamma(r+s)}{\Gamma(r)\,\Gamma(s)}\, x^{r-1} (1-x)^{s-1}, \quad 0 < x < 1$$

$$E(X) = \frac{r}{r+s}$$
The gamma function

$$\Gamma(r) = \int_0^{\infty} x^{r-1} e^{-x}\, dx, \quad r > 0$$

- Γ(1) = 1
- Γ(r + 1) = r Γ(r), r > 0

so that Γ(n) = (n − 1)! for positive integers n.
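These properties can be spot-checked with Python's standard library (math.gamma; the values of r and n below are arbitrary):

```python
import math

print(math.gamma(1.0))                        # Gamma(1) = 1
r = 3.7
print(math.gamma(r + 1), r * math.gamma(r))   # Gamma(r+1) = r * Gamma(r)
n = 6
print(math.gamma(n), math.factorial(n - 1))   # Gamma(n) = (n-1)!  ->  120.0 and 120
```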
Distribution of arbitrary order variables

$$F_{X_{(k)}}(x) = \sum_{i=k}^{n} P(\{\text{exactly } i \text{ of the variables } X_1, \ldots, X_n \text{ are} \le x\}) = \sum_{i=k}^{n} \binom{n}{i} (F(x))^i (1 - F(x))^{n-i}$$
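To illustrate the binomial-sum formula, the sketch below (numpy/scipy assumed; n = 7, k = 3, an Exp(1) parent and x = 0.4 are arbitrary choices) compares the sum with an empirical CDF of the kth order variable:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, k, x = 7, 3, 0.4
reps = 100_000

# Empirical CDF of the k-th order variable, with Exp(1) as the parent distribution.
samples = np.sort(rng.exponential(size=(reps, n)), axis=1)
empirical = np.mean(samples[:, k - 1] <= x)

# The binomial sum from the slide, with F(x) the Exp(1) CDF.
Fx = stats.expon.cdf(x)
formula = sum(stats.binom.pmf(i, n, Fx) for i in range(k, n + 1))

print(empirical, formula)
```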
A useful identity

$$\sum_{i=k}^{n} \binom{n}{i} z^i (1-z)^{n-i} = \frac{\Gamma(n+1)}{\Gamma(k)\,\Gamma(n+1-k)} \int_0^{z} y^{k-1} (1-y)^{n-k}\, dy$$

for k = 1, ..., n and 0 ≤ z ≤ 1.

Can be proven by backward induction.
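Numerically, the left-hand side is the upper tail of a Bin(n, z) distribution and the right-hand side is the regularized incomplete beta function, so the identity can be spot-checked as follows (scipy assumed; n, k and z chosen arbitrarily):

```python
from scipy import special, stats

n, k, z = 9, 4, 0.35

# Left-hand side: the binomial sum.
lhs = sum(stats.binom.pmf(i, n, z) for i in range(k, n + 1))

# Right-hand side: Gamma(n+1) / (Gamma(k) Gamma(n+1-k)) * int_0^z y^(k-1) (1-y)^(n-k) dy,
# which is the regularized incomplete beta function I_z(k, n+1-k).
rhs = special.betainc(k, n + 1 - k, z)

print(lhs, rhs)   # agree to machine precision
```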
Distribution of arbitrary order variables

$$F_{X_{(k)}}(x) = \sum_{i=k}^{n} \binom{n}{i} (F(x))^i (1 - F(x))^{n-i} = \frac{\Gamma(n+1)}{\Gamma(k)\,\Gamma(n+1-k)} \int_0^{F(x)} y^{k-1} (1-y)^{n-k}\, dy$$

that is,

$$F_{X_{(k)}}(x) = F_{\beta(k,\, n+1-k)}(F(x)), \quad k = 1, \ldots, n$$
Distribution of arbitrary order variables from a U(0,1) distribution

$$F_{X_{(k)}}(x) = \frac{\Gamma(n+1)}{\Gamma(k)\,\Gamma(n+1-k)} \int_0^{x} y^{k-1} (1-y)^{n-k}\, dy$$

that is,

$$F_{X_{(k)}}(x) = F_{\beta(k,\, n+1-k)}(x), \quad k = 1, \ldots, n$$
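A simulation sketch of this special case (numpy/scipy assumed; n = 10 and k = 4 chosen for illustration) compares the empirical CDF of X_(k) from U(0,1) samples with the β(k, n + 1 − k) CDF at a few points:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n, k, reps = 10, 4, 100_000

u_sorted = np.sort(rng.random((reps, n)), axis=1)
xs = np.array([0.2, 0.4, 0.6, 0.8])

empirical = [(u_sorted[:, k - 1] <= x).mean() for x in xs]
theoretical = stats.beta(k, n + 1 - k).cdf(xs)

for x, e, t in zip(xs, empirical, theoretical):
    print(f"x={x:.1f}  empirical {e:.4f}  beta({k},{n + 1 - k}) CDF {t:.4f}")
```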
Joint distribution of the extreme order variables
$$f_{X_{(1)}, X_{(n)}}(x, y) = \begin{cases} n(n-1)\,(F(y) - F(x))^{n-2} f(y) f(x), & \text{for } x < y \\ 0, & \text{otherwise} \end{cases}$$

Proof:

$$F_{X_{(1)}, X_{(n)}}(x, y) = P(X_{(1)} \le x, X_{(n)} \le y) = P(X_{(n)} \le y) - P(X_{(1)} > x, X_{(n)} \le y) = \ldots$$
Functions of random variables

Let X have an arbitrary continuous distribution, and suppose that g is a (differentiable) strictly increasing function. Set

$$Y = g(X)$$

Then

$$F_Y(y) = P(Y \le y) = P(X \le g^{-1}(y)) = F_X(g^{-1}(y))$$

and

$$f_Y(y) = f_X(g^{-1}(y))\, \frac{d}{dy} g^{-1}(y) = f_X(g^{-1}(y)) \left| \frac{d}{dy} g^{-1}(y) \right|$$
Linear functions of random vectors

Let (X1, X2) have a uniform distribution on D = {(x, y); 0 < x < 1, 0 < y < 1}. Set

$$Y = a + BX = \begin{pmatrix} a_1 \\ a_2 \end{pmatrix} + \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix} \begin{pmatrix} X_1 \\ X_2 \end{pmatrix}$$

Then

$$f_{(Y_1, Y_2)}(y_1, y_2) = \begin{cases} \dfrac{1}{|\det(B)|} = |\det(B^{-1})|, & \text{if } (y_1, y_2) \in \{a + Bx : x \in D\} \\[6pt] 0, & \text{otherwise} \end{cases}$$
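A Monte Carlo sketch of this result (numpy assumed; the vector a, the matrix B and the evaluation box are arbitrary choices): the density of Y near a point well inside the image of the unit square should be approximately 1/|det(B)|:

```python
import numpy as np

rng = np.random.default_rng(5)
a = np.array([1.0, -0.5])
B = np.array([[2.0, 1.0],
              [0.5, 1.5]])          # det(B) = 2.5

# Uniform points on the unit square, mapped through Y = a + B X.
X = rng.random((200_000, 2))
Y = a + X @ B.T

# Estimate the density of Y in a small box around the image of the square's centre.
centre = a + B @ np.array([0.5, 0.5])
h = 0.05
in_box = np.all(np.abs(Y - centre) <= h, axis=1)
density_estimate = in_box.mean() / (2 * h) ** 2

print(density_estimate, 1 / abs(np.linalg.det(B)))   # both close to 0.4
```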
Functions of random vectors

Let (X1, X2) have an arbitrary continuous distribution, and suppose that g is a (differentiable) one-to-one transformation. Set

$$(Y_1, Y_2) = g(X_1, X_2)$$

Then

$$f_{(Y_1, Y_2)}(y_1, y_2) = f_{(X_1, X_2)}(h_1(y_1, y_2), h_2(y_1, y_2)) \left| \det\begin{pmatrix} \dfrac{\partial x_1}{\partial y_1} & \dfrac{\partial x_1}{\partial y_2} \\[6pt] \dfrac{\partial x_2}{\partial y_1} & \dfrac{\partial x_2}{\partial y_2} \end{pmatrix} \right|$$

where h is the inverse of g.

Proof: Use the variable transformation theorem.
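As a worked non-linear example (my own choice, not from the slides): take (X1, X2) uniform on the unit square and g(x1, x2) = (x1, x1·x2). The inverse is h(y1, y2) = (y1, y2/y1) with Jacobian determinant 1/y1, so the theorem gives f_Y(y1, y2) = 1/y1 on {0 < y2 < y1 < 1}. The sketch below (numpy/scipy assumed) checks that this density integrates to one and matches a simulated probability:

```python
import numpy as np
from scipy import integrate

rng = np.random.default_rng(6)

def f_Y(y2, y1):
    """Transformation-theorem density: f_X(h(y)) * |det J_h| = 1 * (1 / y1)."""
    return 1.0 / y1

# The density should integrate to one over its support {0 < y2 < y1 < 1}.
total, _ = integrate.dblquad(f_Y, 0, 1, 0, lambda y1: y1)
print(total)                                        # ~1.0

# Cross-check one probability against simulation: P(Y1 <= 0.5, Y2 <= 0.25).
prob, _ = integrate.dblquad(f_Y, 0, 0.5, 0, lambda y1: min(y1, 0.25))
X = rng.random((200_000, 2))
y1_sim, y2_sim = X[:, 0], X[:, 0] * X[:, 1]
print(prob, np.mean((y1_sim <= 0.5) & (y2_sim <= 0.25)))
```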
Density of the range

Consider the bivariate injection

$$\begin{cases} U = X_{(1)} \\ R = X_{(n)} - X_{(1)} \end{cases} \qquad \Longleftrightarrow \qquad \begin{cases} X_{(1)} = h_1(U, R) = U \\ X_{(n)} = h_2(U, R) = U + R \end{cases}$$

Then

$$J = \begin{vmatrix} 1 & 0 \\ 1 & 1 \end{vmatrix} = 1$$

$$f_{U, R}(u, r) = f_{X_{(1)}, X_{(n)}}(u, u + r)$$

and

$$f_R(r) = \int_{-\infty}^{\infty} f_{X_{(1)}, X_{(n)}}(u, u + r)\, du$$
Density of the range

$$f_{R_n}(r) = n(n-1) \int_{-\infty}^{\infty} (F(u + r) - F(u))^{n-2} f(u + r) f(u)\, du, \quad r > 0$$
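The integral can be evaluated numerically for a concrete parent distribution. The sketch below (numpy/scipy assumed; a U(0,1) parent and n = 5 chosen for illustration) compares the formula with a histogram of simulated ranges:

```python
import numpy as np
from scipy import integrate, stats

rng = np.random.default_rng(7)
n = 5

def range_density(r):
    """f_{R_n}(r) from the slide's integral formula, with a U(0,1) parent."""
    integrand = lambda u: ((stats.uniform.cdf(u + r) - stats.uniform.cdf(u)) ** (n - 2)
                           * stats.uniform.pdf(u + r) * stats.uniform.pdf(u))
    # The integrand vanishes outside (0, 1); pass the break point at u = 1 - r.
    value, _ = integrate.quad(integrand, 0, 1, points=[max(1 - r, 0)])
    return n * (n - 1) * value

# Histogram of simulated ranges for comparison.
samples = rng.random((100_000, n))
R = samples.max(axis=1) - samples.min(axis=1)
hist, edges = np.histogram(R, bins=20, range=(0, 1), density=True)
mids = 0.5 * (edges[:-1] + edges[1:])

for r, h in zip(mids[::5], hist[::5]):
    print(f"r={r:.3f}  integral formula {range_density(r):.3f}  histogram {h:.3f}")
```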
The range of a sample from an exponential distribution with mean one

$$f_{R_n}(r) = (n-1)(1 - e^{-r})^{n-2} e^{-r}, \quad r > 0$$

$$F_{R_n}(r) = (1 - e^{-r})^{n-1} = (F(r))^{n-1}, \quad r > 0$$

Probabilistic interpretation of the last equation?
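A quick simulation check of the CDF formula (numpy assumed; n = 6 and r = 1.2 are arbitrary). The second comparison simply re-reads (F(r))^{n−1} as the CDF of the maximum of n − 1 Exp(1) variables; the probabilistic "why" is left to the question above:

```python
import numpy as np

rng = np.random.default_rng(8)
n, reps, r0 = 6, 100_000, 1.2

# Ranges of Exp(1) samples of size n.
samples = rng.exponential(size=(reps, n))
R = samples.max(axis=1) - samples.min(axis=1)

empirical = np.mean(R <= r0)
formula = (1 - np.exp(-r0)) ** (n - 1)          # F_{R_n}(r) = (1 - e^{-r})^{n-1}
print(empirical, formula)

# (F(r))^{n-1} is also the CDF of the maximum of n - 1 independent Exp(1) variables.
M = rng.exponential(size=(reps, n - 1)).max(axis=1)
print(np.mean(M <= r0))
```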
Joint distribution of the order statistic

Consider the mapping

(X1, …, Xn) → (X(1), …, X(n))

or

$$\begin{pmatrix} X_{(1)} \\ \vdots \\ X_{(n)} \end{pmatrix} = P \begin{pmatrix} X_1 \\ \vdots \\ X_n \end{pmatrix}$$

where P is a permutation matrix
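For a concrete sample, the (data-dependent) permutation matrix P can be built from the ranks of the observations; a minimal numpy sketch (the sample and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(9)
x = rng.exponential(size=5)

# The permutation matrix that sorts this particular sample:
# row i of P picks out the i-th smallest observation.
order = np.argsort(x)
P = np.eye(len(x))[order]

print(np.allclose(P @ x, np.sort(x)))   # True
```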
Joint density of the order statistic

$$f_{X_{(1)}, \ldots, X_{(n)}}(y_1, \ldots, y_n) = \begin{cases} n! \displaystyle\prod_{k=1}^{n} f(y_k), & \text{if } y_1 < \ldots < y_n \\[6pt] 0, & \text{otherwise} \end{cases}$$
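For n = 2 and a U(0,1) parent, the joint density is 2 on {y1 < y2}, which can be checked against simulation for one probability (numpy/scipy assumed; the thresholds 0.3 and 0.6 are arbitrary):

```python
import numpy as np
from scipy import integrate

rng = np.random.default_rng(10)
a, b = 0.3, 0.6

# n = 2 and a U(0,1) parent: the joint density of (X_(1), X_(2)) is 2! = 2 on {y1 < y2}.
# Integrate it over {0 < y1 <= a, y1 < y2 <= b} to get P(X_(1) <= a, X_(2) <= b).
prob, _ = integrate.dblquad(lambda y2, y1: 2.0, 0, a, lambda y1: y1, b)

# Monte Carlo estimate of the same probability.
X = rng.random((200_000, 2))
mc = np.mean((X.min(axis=1) <= a) & (X.max(axis=1) <= b))

print(prob, mc)   # both close to 0.27
```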
Exercises: Chapter IV
4.2, 4.7, 4.9, 4.12, 4.14, 4.17