MATH 564/STAT 555 Applied Stochastic Processes
Homework 1, September 4, 2015
Due September 11, 2015
1. (2 points) Let (Ω, F, P) be a probability space and A ∈ F an event with P(A) > 0. Events B and C are said to be conditionally independent given A if P(B ∩ C | A) = P(B | A)P(C | A). Show that B and C are conditionally independent given A if and only if P(B | C ∩ A) = P(B | A).
Solution: Assume that P(B ∩ C | A) = P(B | A)P(C | A). Then

    P(B | C ∩ A) = P(B ∩ C ∩ A)/P(C ∩ A) = [P(B ∩ C | A)P(A)]/[P(C | A)P(A)]
    = P(B ∩ C | A)/P(C | A) = [P(B | A)P(C | A)]/P(C | A) = P(B | A) .

Conversely, if P(B | C ∩ A) = P(B | A), then

    P(B ∩ C | A) = P(B ∩ C ∩ A)/P(A) = P(B | C ∩ A)P(C ∩ A)/P(A)
    = P(B | A) · P(C ∩ A)/P(A) = P(B | A)P(C | A) .
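As an illustrative aside (not part of the original solution), both identities can be checked by direct enumeration on a small finite probability space: choose a fair or a biased coin at random, then toss it twice; given the chosen coin, the two tosses are conditionally independent. The coin probabilities below are arbitrary choices.

```python
from itertools import product

# Finite probability space: choose a coin (fair or biased, prob 1/2 each),
# then toss it twice. Outcome = (coin, toss1, toss2).
outcomes = {}
for coin, ph in [("fair", 0.5), ("biased", 0.9)]:
    for t1, t2 in product([0, 1], repeat=2):
        outcomes[(coin, t1, t2)] = (
            0.5 * (ph if t1 else 1 - ph) * (ph if t2 else 1 - ph)
        )

def prob(event):
    return sum(p for w, p in outcomes.items() if event(w))

A = lambda w: w[0] == "fair"  # the fair coin was chosen
B = lambda w: w[1] == 1       # first toss is heads
C = lambda w: w[2] == 1       # second toss is heads

pB_A = prob(lambda w: A(w) and B(w)) / prob(A)
pC_A = prob(lambda w: A(w) and C(w)) / prob(A)
pBC_A = prob(lambda w: A(w) and B(w) and C(w)) / prob(A)
pB_CA = prob(lambda w: A(w) and B(w) and C(w)) / prob(lambda w: A(w) and C(w))

assert abs(pBC_A - pB_A * pC_A) < 1e-12  # P(B ∩ C | A) = P(B|A)P(C|A)
assert abs(pB_CA - pB_A) < 1e-12         # P(B | C ∩ A) = P(B|A)
```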
2. (2 points) Let X = (Xn)n≥0 be a stochastic process with state space S satisfying the following Markov property:

    P(Xn+1 = j | Xn = i, Xn−1 = in−1, . . . , X0 = i0) = P(Xn+1 = j | Xn = i)   (1)

for all n ≥ 0 and all i0, . . . , in−1, i, j ∈ S. This means that the (immediate) future conditional on the past and present is the same as the (immediate) future conditional on the present (that is, if we know the present we can forget the past). Prove that (1) is true if and only if

    P(Xn+1 = j, Xn−1 = in−1, . . . , X0 = i0 | Xn = i)
    = P(Xn+1 = j | Xn = i) P(Xn−1 = in−1, . . . , X0 = i0 | Xn = i) ,   (2)

that is, conditional on the present, the (immediate) future and the past are independent.
Solution: Let A = {Xn = i}, B = {Xn+1 = j} and C = {Xn−1 = in−1, . . . , X0 = i0}. Then (1) reads P(B | C ∩ A) = P(B | A) and (2) reads P(B ∩ C | A) = P(B | A)P(C | A), so the equivalence follows from Problem 1.
3. (2 points) Let B1, B2, . . . be disjoint events with B1 ∪ B2 ∪ · · · = Ω. Show that if A is another event and P(A | Bn) = p for all n, then P(A) = p.
Deduce that if X and Y are discrete random variables then the following are equivalent: (a) X and Y are independent; (b) the conditional distribution of X given Y = y is independent of y.
Solution:

    P(A) = Σn≥1 P(A ∩ Bn) = Σn≥1 P(Bn)P(A | Bn) = p Σn≥1 P(Bn) = p .
For the second part recall that X and Y are independent if P(X = x, Y = y) = P(X = x)P(Y = y) for all x, y. Suppose that X and Y are independent. Then

    P(X = x | Y = y) = P(X = x, Y = y)/P(Y = y) = P(X = x) ,

which is independent of y. Conversely, let p := P(X = x | Y = y) be independent of y. Applying the first part with Bn = {Y = yn}, where y1, y2, . . . enumerate the possible values of Y, gives P(X = x) = p = P(X = x | Y = y). Hence P(X = x, Y = y) = P(X = x | Y = y)P(Y = y) = P(X = x)P(Y = y).
4. (3 points) Let Y = (Yn)n≥1 be a sequence of i.i.d. (independent and identically distributed) random variables taking values in some set E (e.g. E = [0, 1], E = R, E = R^d, E = R^∞, . . .), let f : S × E → S be a function, and let X0 be a random variable in S independent of the sequence Y. For n ≥ 1 define
    Xn = f(Xn−1, Yn) .   (3)
Show that X = (Xn )n≥0 is a Markov chain and express its transition matrix P in terms of
f . Can all Markov chains be realized in this way? How would you simulate a Markov chain
using a computer?
Solution: We calculate
P(Xn+1 = j | Xn = i, Xn−1 = in−1 , . . . , X0 = i0 )
= P(f (Xn , Yn+1 ) = j | Xn = i, Xn−1 = in−1 , . . . , X0 = i0 )
= P(f (i, Yn+1 ) = j | Xn = i, Xn−1 = in−1 , . . . , X0 = i0 )
= P(f (i, Yn+1 ) = j) ,
where the last equality follows from the independence of Yn+1 and the family X0, X1, . . . , Xn (the latter random variables are functions of X0 and Y1, . . . , Yn, which are by assumption independent of Yn+1). It follows that X is a Markov chain with transition probabilities pij = P(f(i, Y1) = j).
In order to simulate the Markov chain we proceed as follows: Let (Ω, F, P) be any probability space that supports a sequence (Un )n≥0 of i.i.d. random variables uniformly distributed
on [0, 1]. Suppose further that the state space is (countably) infinite and WLOG assume
S = {1, 2, . . . }. The case of a finite state space is analogous.
We first define X0. Let λ = (λk)k∈S denote the initial distribution and let g : [0, 1] → S be the function defined by

    g(u) = Σk≥1 k · 1{λ1 + · · · + λk−1 < u ≤ λ1 + · · · + λk} ,   u ∈ [0, 1]

(with the convention that the empty sum is 0). Set X0 = g(U0); then X0 = k if and only if U0 ∈ (λ1 + · · · + λk−1, λ1 + · · · + λk]. Thus,

    P(X0 = k) = P(λ1 + · · · + λk−1 < U0 ≤ λ1 + · · · + λk)
    = (λ1 + · · · + λk) − (λ1 + · · · + λk−1) = λk ,

meaning that the distribution of X0 is given by λ.
Now we proceed to Xn, n ≥ 1. Let f : S × [0, 1] → S be defined as

    f(i, u) = Σk≥1 k · 1{pi1 + · · · + pi,k−1 < u ≤ pi1 + · · · + pik} ,   i ∈ S, u ∈ [0, 1] .   (4)

Then f(i, u) = k if and only if pi1 + · · · + pi,k−1 < u ≤ pi1 + · · · + pik. Set

    Xn = f(Xn−1, Un) ,   n ≥ 1 .

Note that if Xn−1 = i, then Xn = k if and only if Un ∈ (pi1 + · · · + pi,k−1, pi1 + · · · + pik]. Therefore, for n ≥ 1,

    P(Xn = k | Xn−1 = i) = P(pi1 + · · · + pi,k−1 < Un ≤ pi1 + · · · + pik | Xn−1 = i)
    = P(pi1 + · · · + pi,k−1 < Un ≤ pi1 + · · · + pik)
    = (pi1 + · · · + pik) − (pi1 + · · · + pi,k−1) = pik .

This shows that X = (Xn : n ≥ 0) is a (λ, P)-Markov chain.
Not every Markov chain is naturally presented in the form (3): there is a difference between realizing a Markov chain on a given probability space by (3) and simulating the chain as described above. Take, for example, a random walk, which is naturally defined as Xn = f(Xn−1, Yn) with f(x, y) = x + y; this f is quite natural, as opposed to the f defined in (4). By contrast, a general birth-and-death Markov chain cannot be written as in (3) in such a natural way.
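Below is a minimal Python sketch of the simulation scheme just described (illustrative, not part of the original solution; it assumes a finite state space, labelled 0, 1, . . . rather than 1, 2, . . .).

```python
import random

def sample_from(dist, u):
    """Inverse-CDF lookup: the function f(i, u) of (4) with 0-based states.
    Returns the smallest k with dist[0] + ... + dist[k] >= u."""
    cum = 0.0
    for k, p in enumerate(dist):
        cum += p
        if u <= cum:
            return k
    return len(dist) - 1  # guard against floating-point round-off

def simulate_chain(lam, P, n_steps):
    """Simulate n_steps steps of a (lam, P)-Markov chain on {0, ..., len(lam)-1}."""
    x = sample_from(lam, random.random())       # X0 = g(U0)
    path = [x]
    for _ in range(n_steps):
        x = sample_from(P[x], random.random())  # Xn = f(X_{n-1}, Un)
        path.append(x)
    return path

# Example: symmetric random walk on the vertices of a triangle (cf. Problem 6).
P = [[0, 1/2, 1/2], [1/2, 0, 1/2], [1/2, 1/2, 0]]
print(simulate_chain([1, 0, 0], P, 10))
```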
5. (4 points) Suppose that Y1, Y2, . . . are independent, identically distributed random variables such that P(Yi = 1) = p ∈ (0, 1) and P(Yi = 0) = 1 − p. Set S0 = 0, Sn = Y1 + · · · + Yn. In each of the following cases determine whether (Xn)n≥0 is a Markov chain: (a) Xn = Yn, (b) Xn = Sn, (c) Xn = S0 + S1 + · · · + Sn, (d) Xn = (Sn, S0 + S1 + · · · + Sn). In the cases where (Xn)n≥0 is a Markov chain find its state-space and transition matrix, and in the cases where it is not a Markov chain give an example where P(Xn+1 = i | Xn = j, Xn−1 = k) is not independent of k.
Solution: (a) Set f(x, y) = y. Then Xn = f(Xn−1, Yn), so it is a Markov chain by Problem 4; its state space is {0, 1} and pij = P(Y1 = j), that is, pi1 = p and pi0 = 1 − p. (b) Set f(x, y) = x + y. Then Xn = f(Xn−1, Yn), and again it is a Markov chain by Problem 4; its state space is Z+ with pi,i+1 = p and pi,i = 1 − p. (c) This is not a Markov chain. For example, by noting that Xn = nY1 + (n − 1)Y2 + · · · + 2Yn−1 + Yn, we have that
P(X4 = 6|X3 = 3, X2 = 2, X1 = 1)
= P(4Y1 + 3Y2 + 2Y3 + Y4 = 6|Y1 = 1, Y2 = 0, Y3 = 0)
= P(4 + Y4 = 6|Y1 = 1, Y2 = 0, Y3 = 0) = P(Y4 = 2) = 0 .
On the other hand,
P(X4 = 6|X3 = 3, X2 = 1, X1 = 0)
= P(4Y1 + 3Y2 + 2Y3 + Y4 = 6|Y1 = 0, Y2 = 1, Y3 = 1)
= P(5 + Y4 = 6|Y1 = 0, Y2 = 1, Y3 = 1) = P(Y4 = 1) = p .
(d) The state space of Xn is Z+ × Z+. Let Xn = (X′n, X″n) where X′n = Sn and X″n = S0 + S1 + · · · + Sn, and let f : Z+ × Z+ × {0, 1} → Z+ × Z+ be defined by f(x′, x″, y) = (x′ + y, x′ + x″ + y). Then (X′n, X″n) = f(X′n−1, X″n−1, Yn), hence (Xn)n≥0 is a Markov chain.
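The counterexample in (c) can be confirmed by brute-force enumeration over (Y1, . . . , Y4); the sketch below (illustrative, with the arbitrary choice p = 0.3) reproduces both conditional probabilities.

```python
from itertools import product

p = 0.3  # arbitrary value in (0, 1)

def cond_prob(x3, x2, x1, x4=6):
    """P(X4 = x4 | X3 = x3, X2 = x2, X1 = x1) for Xn = n*Y1 + (n-1)*Y2 + ... + Yn."""
    num = den = 0.0
    for ys in product([0, 1], repeat=4):
        weight = 1.0
        for y in ys:
            weight *= p if y == 1 else 1 - p
        # xs[n-1] = X_n = n*Y1 + (n-1)*Y2 + ... + Yn
        xs = [sum((n - j) * ys[j] for j in range(n)) for n in range(1, 5)]
        if (xs[0], xs[1], xs[2]) == (x1, x2, x3):
            den += weight
            if xs[3] == x4:
                num += weight
    return num / den

print(cond_prob(3, 2, 1))  # 0.0
print(cond_prob(3, 1, 0))  # 0.3, i.e. p
```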
6. (4 points) A flea hops about at random on the vertices of a triangle, with all jumps equally
likely. Find the probability that after n hops the flea is back where it started.
A second flea also hops about on the vertices of a triangle, but this flea is twice as likely
to jump clockwise as anticlockwise. What is the probability that after n hops this second flea
is back where it started?
Solution: In the first case the transition matrix is equal to

            (  0   1/2  1/2 )
        P = ( 1/2   0   1/2 ) .
            ( 1/2  1/2   0  )

The eigenvalues of P are equal to 1, −1/2, −1/2. Thus we search for the n-step transition probabilities in the form

    p11^(n) = a + (b + cn)(−1/2)^n .

To find a, b, c we use that

    1 = p11^(0) = a + b
    0 = p11^(1) = a + (b + c)(−1/2)
    1/2 = p11^(2) = a + (b + 2c)(1/4) .

The solution is a = 1/3, b = 2/3, c = 0, so that p11^(n) = 1/3 + (2/3)(−1/2)^n.
In the second case the transition matrix is

            (  0   2/3  1/3 )
        P = ( 1/3   0   2/3 ) .
            ( 2/3  1/3   0  )

The eigenvalues are 1, −1/2 + i√3/6 and −1/2 − i√3/6. Thus we search for the n-step transition probabilities in the form

    p11^(n) = a + b(−1/2 + i√3/6)^n + c(−1/2 − i√3/6)^n .

To find a, b, c we use that

    1 = p11^(0) = a + b + c
    0 = p11^(1) = a + b(−1/2 + i√3/6) + c(−1/2 − i√3/6)
    4/9 = p11^(2) = a + b(−1/2 + i√3/6)² + c(−1/2 − i√3/6)² .

The solution is a = b = c = 1/3. Since −1/2 ± i√3/6 = (1/√3) e^{±i5π/6}, this gives

    p11^(n) = (1/3) [ 1 + (−1/2 + i√3/6)^n + (−1/2 − i√3/6)^n ]
            = 1/3 + (2/3) (1/√3)^n cos(5πn/6)
            = 1/3 + (2/3) (−1/√3)^n cos(πn/6) .
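Both closed forms can be checked against matrix powers; below is a short numerical sketch using numpy (illustrative, not part of the original solution).

```python
import numpy as np

P1 = np.array([[0, 1/2, 1/2], [1/2, 0, 1/2], [1/2, 1/2, 0]])  # first flea
P2 = np.array([[0, 2/3, 1/3], [1/3, 0, 2/3], [2/3, 1/3, 0]])  # second flea

for n in range(10):
    f1 = 1/3 + (2/3) * (-1/2) ** n
    f2 = 1/3 + (2/3) * (1/np.sqrt(3)) ** n * np.cos(5 * np.pi * n / 6)
    assert np.isclose(np.linalg.matrix_power(P1, n)[0, 0], f1)
    assert np.isclose(np.linalg.matrix_power(P2, n)[0, 0], f2)
print("closed forms agree with matrix powers")
```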
7. (3 points) Show that every transition matrix on a finite state-space has at least one closed
communicating class. Find an example of a transition matrix with no closed communicating
class.
Solution: Since the state-space is finite, there are finitely many communicating classes, say C1, C2, . . . , Cn. Assume that none of them is closed. Let l1 = 1. Since Cl1 = C1 is not closed, there exist i1 ∈ Cl1 and j2 ∈ Cl2, l2 ≠ l1, such that i1 → j2 (that is, the chain escapes from C1 to some other class Cl2). Since Cl2 is not closed, there exist i2 ∈ Cl2 and j3 ∈ Cl3, l3 ≠ l2, such that i2 → j3. By continuing this process we obtain a sequence (Clk)k≥1 of communicating classes and two sequences of states, (ik)k≥1 and (jk)k≥2, such that ik ∈ Clk, jk ∈ Clk and ik → jk+1, k ≥ 1. Since the number of communicating classes is finite, there must be at least one class which appears at least twice in the sequence. Without loss of generality we assume that the class C1 appears twice. Since two elements in the same communicating class obviously communicate, we obtain that

    i1 → j2 → i2 → j3 → · · · → jm ∈ C1 .

Since jm → i1 (they are both in the same communicating class), we close the circle. This means that all states in the above sequence communicate. This is a contradiction with the fact that at least Cl2 is different from C1.
An example of a Markov chain with no closed communicating class is the deterministic
motion to the right on Z+ .
8. A random walker moves along the graph on the picture. When the walker is at a vertex,
with equal probabilities he moves to any of the adjacent vertices.
[Figure: vertices 1, 2, 3, 4 form a square (edges 1–2, 2–3, 3–4, 4–1), with vertex 5 in the centre joined to each of the four corner vertices.]
(a) (1 point) Write down the transition matrix of the corresponding Markov chain.
(b) (2 points) The walker starts at vertex 1. Find the probability that he hits vertex 2
before vertex 5.
(c) (2 points) The walker starts at vertex 1. Compute the expected number of steps until
he arrives at vertex 3.
Solution:
(a)

            (  0   1/3   0   1/3  1/3 )
            ( 1/3   0   1/3   0   1/3 )
        P = (  0   1/3   0   1/3  1/3 )
            ( 1/3   0   1/3   0   1/3 )
            ( 1/4  1/4  1/4  1/4   0  )
(b) Let Ti = min{n ≥ 0 : Xn = i} and hi = Pi(T2 < T5), i ∈ S (S is the set of vertices). Then h2 = 1, h5 = 0, and the first step analysis gives the linear system

    h1 = 1/3 + (1/3)h4
    h3 = 1/3 + (1/3)h4
    h4 = (1/3)h1 + (1/3)h3 .

The unique solution is h1 = h3 = 3/7 and h4 = 2/7.
(c) Let gi = Ei(T3). Clearly, g3 = 0. The first step analysis gives the linear system

    g1 = (1/3)g2 + (1/3)g4 + (1/3)g5 + 1
    g2 = (1/3)g1 + (1/3)g5 + 1
    g4 = (1/3)g1 + (1/3)g5 + 1
    g5 = (1/4)g1 + (1/4)g2 + (1/4)g4 + 1 .

The unique solution is g1 = 16/3, g2 = g4 = 64/15, g5 = 67/15.
9. (2 points) Let 0 < p = 1 − q < 1. Show that the general solution of the recurrence relation

    h0 = 1
    hi = p hi+1 + q hi−1 ,   i ≥ 1,

is given by hi = A + B(q/p)^i when p ≠ q, and by hi = A + Bi when p = q.
Solution: See the textbook, Appendix 1.11.
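As a quick numerical sanity check (not from the textbook), one can verify that hi = A + B(q/p)^i satisfies hi = p hi+1 + q hi−1 for arbitrary constants A, B and p ≠ q:

```python
p, q = 0.7, 0.3   # arbitrary p != q with p + q = 1
A, B = 2.0, -1.0  # arbitrary constants
h = [A + B * (q / p) ** i for i in range(20)]
for i in range(1, 19):
    assert abs(h[i] - (p * h[i + 1] + q * h[i - 1])) < 1e-12
print("recurrence satisfied")
```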
10. (2 points) Let (Xn)n≥0 be a Markov chain on {0, 1, 2, . . .} with transition probabilities given by

    p01 = 1 ,   pi,i+1 + pi,i−1 = 1 ,   pi,i+1 = ((i + 1)/i)² pi,i−1 ,   i ≥ 1.

Show that if X0 = 0 then the probability that Xn ≥ 1 for all n ≥ 1 is 6/π².
Solution: (Xn)n≥0 is almost a birth-and-death chain, the only difference being that 0 is not an absorbing state but a reflecting one. Using the notation from Example 1.3.4, let pi = pi,i+1 and qi = pi,i−1, so that

    pi = ((i + 1)/i)² qi .

With the same notation, γ0 = 1 and, since qj/pj = (j/(j + 1))², the product telescopes:

    γi = (qi qi−1 · · · q1)/(pi pi−1 · · · p1) = 1/(i + 1)² ,   i ≥ 1.
Note that

    P0(Xn ≥ 1 for all n ≥ 1) = P1(T0 = ∞) = 1 − P1(T0 < ∞)
    = 1 − (Σj≥1 γj)/(Σj≥0 γj) = γ0/(Σj≥0 γj) = 1/(Σj≥1 1/j²) = 6/π² .
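As a quick numerical check (not part of the original solution), the partial sums of Σj≥0 γj approach π²/6, so the reciprocal approaches 6/π²:

```python
import math

# gamma_0 = 1 and gamma_i = 1/(i+1)^2, so sum_{j>=0} gamma_j = sum_{k>=1} 1/k^2
total = sum(1.0 / (j + 1) ** 2 for j in range(10**6))
print(1.0 / total)         # ~ 0.6079 (partial sum of the series)
print(6.0 / math.pi ** 2)  # 0.607927...
```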