2 Review of probability theory
2.8 Fourier Transformation and Dirac Delta Function
In the definition of the characteristic function above,

$$\Phi(k) = \int dx\, e^{ikx} p(x),$$

we used something that is called a Fourier transform. Given a function $f(x)$ with a set of properties that aren't important here, the Fourier transform is defined as

$$\tilde{f}(k) = \int dx\, e^{ikx} f(x).$$
Thus the characteristic function is the Fourier transform of the probability density function $p(x)$. Given the Fourier transform $\tilde{f}(k)$, one can compute the original function by the inverse transform

$$f(x) = \frac{1}{2\pi} \int dk\, e^{-ikx} \tilde{f}(k).$$
Note the factor of $1/2\pi$. This means that if we plug in $\tilde{f}(k)$ then

$$f(x) = \frac{1}{2\pi} \int dk\, e^{-ikx} \int dy\, e^{iky} f(y) = \int dy \left[ \frac{1}{2\pi} \int dk\, e^{ik(y-x)} \right] f(y).$$

Now let's look at the integral

$$\frac{1}{2\pi} \int dk\, e^{ik(x-y)} = \delta(x - y).$$
If we let $x = y$ then this integral is divergent (infinite). If $x \neq y$ we have
$$\frac{1}{2\pi} \int dk\, e^{ik(x-y)} = \frac{1}{2\pi} \int dk\, \cos(k(x-y)) + \frac{i}{2\pi} \int dk\, \sin(k(x-y)).$$
These are both integrals over oscillatory functions, and such integrals amount to zero. So it seems that this function is

$$\delta(x) = \begin{cases} 0 & \text{for } x \neq 0 \\ \infty & \text{for } x = 0 \end{cases}$$

and

$$\delta(x) = \frac{1}{2\pi} \int dk\, e^{ikx}.$$
At the same time we have

$$\int_{-A}^{A} dx\, \delta(x) = \frac{1}{2\pi} \int dk\, \frac{1}{ik} \left[ e^{ikA} - e^{-ikA} \right] = \frac{1}{\pi} \int dk\, \frac{\sin(kA)}{k} = 1.$$
So we have a function that

• is zero almost everywhere,
• is infinitely large at the origin,
• and whose “area” under the curve is 1.
This is called the Dirac delta function. Inserted in the above equation it means

$$f(x) = \int dy\, \delta(x - y) f(y).$$

This then makes sense: the $\delta(x - y)$ “pulls out” the functional value $f(x)$ from all the values $f(y)$.
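To see this sifting property in action numerically, here is a minimal sketch (Python with NumPy, not part of the original notes) that replaces $\delta$ by a narrow Gaussian; the test function $f = \cos$, the point $x$, and the width are arbitrary illustrative choices.

```python
import numpy as np

def delta_approx(u, eps=1e-3):
    # Normalized Gaussian that tends to delta(u) as eps -> 0
    return np.exp(-u**2 / (2 * eps**2)) / np.sqrt(2 * np.pi * eps**2)

f = np.cos                              # arbitrary test function f(y)
x = 0.7                                 # point at which to "pull out" f
y = np.linspace(x - 1.0, x + 1.0, 200_001)
dy = y[1] - y[0]

# Riemann sum for  int dy delta(x - y) f(y)
integral = np.sum(delta_approx(x - y) * f(y)) * dy
print(integral, f(x))                   # both approximately cos(0.7)
```

As the width shrinks (with a correspondingly finer grid), the integral converges to $f(x)$.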
The applications of the delta function are numerous. Looking at some of the things we discussed earlier, this becomes apparent. Recall that we originally defined the expectation value of a random number as

$$\langle X \rangle = \lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^{N} X_n$$

and that this was identical to

$$\langle X \rangle = \int dx\, x\, p(x).$$
Let's also recall that the expectation value of a function was

$$\langle f(X) \rangle = \lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^{N} f(X_n) = \int dx\, f(x) p(x).$$
Ha! But that means that the expectation value of the delta function is

$$\langle \delta(z - X) \rangle = \lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^{N} \delta(z - X_n) = \int dx\, \delta(z - x) p(x) = p(z),$$

which means that this expectation value is equal to the pdf itself. This is pretty cool.
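In practice this is exactly how one estimates a pdf from samples: replace the delta by a narrow kernel and average over the data. A minimal sketch (Python/NumPy; the exponential test distribution, kernel width, and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.exponential(scale=1.0, size=100_000)   # X_n with p(x) = e^{-x}

def kernel(u, eps=0.05):
    # Smoothed stand-in for delta(u)
    return np.exp(-u**2 / (2 * eps**2)) / np.sqrt(2 * np.pi * eps**2)

# p(z) ~ (1/N) sum_n delta(z - X_n), evaluated on a grid
z = np.linspace(0.2, 4.0, 20)
p_estimate = np.array([np.mean(kernel(zi - samples)) for zi in z])
print(np.max(np.abs(p_estimate - np.exp(-z))))       # small deviation
```

This is nothing but a kernel density estimate: the empirical average of smeared-out delta functions.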
Transforming Random Variables
An important question is: if we have a random variable $X$ with a certain probability density function $p(x)$, what is the pdf of the transformed random variable

$$Y = g(X)?$$
So in other words, we have a sequence of random numbers, plug them into a function $g$, and ask what the pdf of the transformed random variable is. Well, according to the above, we have

$$p_Y(y) = \langle \delta(y - Y) \rangle = \langle \delta(y - g(X)) \rangle = \lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^{N} \delta(y - g(X_n)) = \int dx\, \delta(y - g(x))\, p_X(x).$$
Now let's assume that the transformation $g$ has an inverse function. Let's call that function $h$, so that $h(g(x)) = x$, or $h = g^{-1}$. Now we make a change of variables in the above integral by setting $x = h(z)$. Then

$$dx = \left| \frac{dh(z)}{dz} \right| dz$$
and the integral becomes

$$\int dx\, \delta(y - g(x))\, p_X(x) = \int dz\, \left| \frac{dh(z)}{dz} \right| \delta(y - z)\, p_X(h(z)).$$
Using the action of the delta function we now see that

$$p_Y(y) = \left| \frac{dh(y)}{dy} \right| p_X(h(y)).$$
So these are the steps we have to carry out in order to obtain $p_Y(y)$:

• Find the inverse $h$ of the transformation $g$.
• Compute the Jacobian $|dh(y)/dy|$.
• Plug everything in.
Let's look at an example. Let's say that $p_X(x) = e^{-x}$ on the interval $x \in [0, \infty)$. Let's plug these exponentially distributed numbers into the function

$$g(X) = X^2.$$

The inverse of the function $g$ is the function

$$h(y) = \sqrt{y}.$$

The Jacobian of that function is

$$\left| \frac{dh(y)}{dy} \right| = \frac{1}{2} \frac{1}{y^{1/2}},$$

so that

$$p_Y(y) = \frac{1}{2} \frac{1}{y^{1/2}}\, e^{-\sqrt{y}}.$$
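This example is easy to check by simulation. A minimal sketch (Python/NumPy; sample size and binning are arbitrary choices) that draws exponential numbers, squares them, and compares the histogram with the formula:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=1.0, size=1_000_000)   # p_X(x) = e^{-x}
y = x**2                                         # Y = g(X) = X^2

bins = np.linspace(0.5, 9.0, 40)
hist, edges = np.histogram(y, bins=bins, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# p_Y(y) = (1/2) y^{-1/2} exp(-sqrt(y))
p_theory = 0.5 * centers**-0.5 * np.exp(-np.sqrt(centers))
print(np.max(np.abs(hist - p_theory)))           # should be small
```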
The simplest transform is multiplying a random number by a constant, say $A$, so that

$$Y = AX = g(X).$$

Then

$$h(y) = \frac{y}{A}.$$

So, given the pdf $p_X(x)$ for $X$, we have

$$p_Y(y) = \frac{1}{A}\, p_X(y/A).$$
So for instance if

$$p(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}$$

then

$$p(y) = \frac{1}{\sqrt{2\pi A^2}}\, e^{-y^2/2A^2}.$$
What happens to the characteristic function when we multiply a random number by a constant?

$$\Phi_Y(k) = \int dy\, e^{iky} p_Y(y) = \frac{1}{A} \int dy\, e^{iky} p_X(y/A) = \frac{1}{A} \int dx\, A\, e^{ikAx} p_X(x) = \Phi_X(Ak),$$

where we substituted $y = Ax$.
Let’s keep this in mind.
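A quick numerical check of $\Phi_Y(k) = \Phi_X(Ak)$ using the empirical characteristic function $\frac{1}{N}\sum_n e^{ikY_n}$ (a minimal Python/NumPy sketch; standard normal $X$ and $A = 3$ are arbitrary choices, and for a standard normal $\Phi_X(k) = e^{-k^2/2}$):

```python
import numpy as np

rng = np.random.default_rng(2)
A = 3.0
y = A * rng.standard_normal(1_000_000)           # Y = A X

k = np.linspace(-1.0, 1.0, 11)
phi_y = np.array([np.mean(np.exp(1j * ki * y)) for ki in k])

# Phi_X(A k) = exp(-(A k)^2 / 2) for standard normal X
print(np.max(np.abs(phi_y - np.exp(-(A * k)**2 / 2))))   # small
```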
Significance of Gaussian Distribution - The Central Limit Theorem
One of the most fundamental ideas of random processes is adding independent, identically distributed random numbers. So let's assume that we have a sum

$$Y_N = \sum_{n=1}^{N} X_n,$$

where each summand $X_n$ is drawn independently from a distribution $p_X(x)$. For simplicity we assume that this pdf is symmetric, so that

$$p_X(x) = p_X(-x).$$
In this case

$$\langle X \rangle = \int dx\, x\, p(x) = \int_{-\infty}^{0} dx\, x\, p(x) + \int_{0}^{\infty} dx\, x\, p(x) = -\int_{0}^{\infty} dx\, x\, p(-x) + \int_{0}^{\infty} dx\, x\, p(x) = 0,$$

where we substituted $x \to -x$ in the first integral and used the symmetry $p(-x) = p(x)$. In this case the mean of the sum is

$$\langle Y_N \rangle = \sum_{n=1}^{N} \langle X_n \rangle = 0.$$

The second moment is

$$\langle Y_N^2 \rangle = \sum_{n=1}^{N} \sum_{m=1}^{N} \langle X_n X_m \rangle = \sum_{n=1}^{N} \langle X_n^2 \rangle + \sum_{n=1}^{N} \sum_{m \neq n} \langle X_n X_m \rangle = N\sigma^2 + \sum_{n=1}^{N} \sum_{m \neq n} \langle X_n \rangle \langle X_m \rangle = N\sigma^2,$$

where the cross terms factorize because $X_n$ and $X_m$ are independent and then vanish because $\langle X_n \rangle = 0$.
So the typical deviation of the sum from zero is

$$\sqrt{\langle Y_N^2 \rangle} = \sigma \sqrt{N}.$$
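This $\sqrt{N}$ growth is easy to see by simulation. A minimal sketch (Python/NumPy; uniform summands on $[-1,1]$, for which $\sigma^2 = 1/3$, and the sample counts are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = np.sqrt(1.0 / 3.0)                # std of uniform on [-1, 1]
for N in (10, 100, 1000):
    # 5000 realizations of the sum Y_N
    Y = rng.uniform(-1, 1, size=(5_000, N)).sum(axis=1)
    print(N, np.sqrt(np.mean(Y**2)), sigma * np.sqrt(N))
```

The measured root-mean-square of $Y_N$ tracks $\sigma\sqrt{N}$.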
The question now is: what is the probability distribution of $Y_N$? We know that, because the variance increases with $N$, the pdf of $Y_N$ will get broader and broader as $N \to \infty$. But how about the random variable

$$Z = \frac{Y_N}{\sqrt{N}}?$$
Let's see, we have

$$Z = \frac{1}{\sqrt{N}} \sum_{n=1}^{N} X_n,$$

so

$$p_Z(z) = \langle \delta(z - Z) \rangle = \left\langle \delta\!\left( z - \frac{1}{\sqrt{N}} \sum_{n=1}^{N} X_n \right) \right\rangle,$$
or explicitly

$$p_Z(z) = \int dx_1 \ldots dx_N\, \delta\!\left( z - \frac{1}{\sqrt{N}} \sum_{n=1}^{N} x_n \right) p_X(x_1) \cdots p_X(x_N).$$
Now we make use of the utterly useful identity

$$\delta(x) = \frac{1}{2\pi} \int dk\, e^{-ikx}$$
and obtain

$$p_Z(z) = \frac{1}{2\pi} \int dk\, e^{-ikz} \left[ \int dx\, e^{ikx/\sqrt{N}} p_X(x) \right]^N = \frac{1}{2\pi} \int dk\, e^{-ikz} \left[ \Phi_X(k/\sqrt{N}) \right]^N,$$

so that means that

$$\Phi_Z(k) = \left[ \Phi_X(k/\sqrt{N}) \right]^N.$$
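This identity can be checked numerically before we expand anything. A minimal sketch (Python/NumPy; uniform summands on $[-1,1]$, for which $\Phi_X(k) = \sin(k)/k$, and all sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 50
# Z = (1/sqrt(N)) sum_n X_n for 100000 realizations
Z = rng.uniform(-1, 1, size=(100_000, N)).sum(axis=1) / np.sqrt(N)

k = np.linspace(0.5, 3.0, 6)
phi_empirical = np.array([np.mean(np.exp(1j * ki * Z)) for ki in k])

# [Phi_X(k/sqrt(N))]^N with Phi_X(k) = sin(k)/k for uniform X
u = k / np.sqrt(N)
phi_formula = (np.sin(u) / u)**N
print(np.max(np.abs(phi_empirical - phi_formula)))   # small
```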
Now let's series expand the characteristic function of $X$:

$$\Phi_X(k/\sqrt{N}) \approx 1 + i \frac{k}{\sqrt{N}} \langle X \rangle - \frac{1}{2} \frac{k^2}{N} \langle X^2 \rangle + O\!\left( \frac{1}{N^{3/2}} \right),$$
so, because all the odd moments vanish, we have

$$\left[ \Phi_X(k/\sqrt{N}) \right]^N \approx \left[ 1 - \frac{1}{2} \frac{k^2}{N} \langle X^2 \rangle + O\!\left( \frac{1}{N^2} \right) \right]^N \approx \left[ 1 - \frac{k^2 \sigma^2}{2N} \right]^N = e^{-k^2 \sigma^2 / 2} = \Phi_Z(k)$$

for large $N$, using $(1 + a/N)^N \to e^{a}$.
Well, we have seen this characteristic function before. It means that

$$p_Z(z) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-z^2/2\sigma^2}.$$
This means that regardless of the properties of the pdf of the summands $X_n$, when we sum them and divide by $\sqrt{N}$, the sum is going to be normally distributed.
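A minimal numerical illustration of this (Python/NumPy; uniform summands on $[-1,1]$ with $\sigma^2 = 1/3$, and $N$ and the sample count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 100
sigma2 = 1.0 / 3.0                       # variance of uniform on [-1, 1]
Z = rng.uniform(-1, 1, size=(50_000, N)).sum(axis=1) / np.sqrt(N)

hist, edges = np.histogram(Z, bins=np.linspace(-2, 2, 21), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
gauss = np.exp(-centers**2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
print(np.max(np.abs(hist - gauss)))      # small: Z is close to Gaussian
```

The histogram of $Z$ matches the Gaussian even though the summands themselves are uniformly distributed.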