Invertibility, Determinants and Eigenvalues of 2-by-2
Matrices with Entries from a Finite Field
Wu, Daiyi
Department of Mathematics and Computer Science
The Citadel, Charleston, S.C. 29409
wud1@citadel.edu
1. Introduction
In this paper, I investigate how linear algebra results change when the field of real
numbers is replaced by a finite field. We will write GF(q) (known as the Galois field), or
more simply just Gq, to denote the finite field with q elements. Necessarily, of course, q is
a power of a prime. When p is prime, Zp and Gp denote the same field. However, if n is
not prime, then Zn is just a ring.
We use the notation Mn(R) to denote the set of n-by-n matrices with entries from
a ring R, always assumed to carry the usual matrix addition and multiplication, but
using the operations defined on the set R. For example, M2(G5) is the set of 2-by-2
matrices with entries from G5.
2. Matrix Theory
As in general linear algebra, we define a function det, the determinant of any square
matrix. However, det(A), for a square matrix A, is not a real number; it is an
element of the ring R. More formally, if A is in Mn(R), we define det(A) inductively as
follows. For any 1-by-1 matrix, say A = (a), we define the determinant of A to be a. If
n ≥ 2 and A is an n × n matrix, we write det(Aij) to mean the determinant of the
(n−1) × (n−1) matrix formed by deleting row i and column j from A. Proceeding
inductively, for n ≥ 2, the determinant of a matrix A is the sum of n terms of the
form a1j·det(A1j), with plus and minus signs alternating, where the entries a11, a12, ..., a1n
are from the first row of A. In symbols,

det A = a11·det A11 − a12·det A12 + ... + (−1)^(1+n)·a1n·det A1n = Σ (j = 1 to n) (−1)^(1+j)·a1j·det A1j.
For example, det [a b; c d] = ad − bc in the case that n = 2. In the case that n = 3, we have:

det [a b c; d e f; g h i] = a·det [e f; h i] − b·det [d f; g i] + c·det [d e; g h]
                          = a(ei − fh) − b(di − fg) + c(dh − eg).

These are the usual rules for expanding a determinant. But again, in this paper, ad − bc is a
ring element.
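The cofactor expansion above can be computed mechanically. The following sketch is our own illustration, not part of the paper: it expands along the first row exactly as in the definition, reducing every result mod p so that det(A) is an element of Gp.

```python
# Hypothetical sketch (not from the paper): cofactor expansion along the
# first row, with arithmetic reduced mod p so det(A) is an element of Gp.
def det_mod(A, p):
    n = len(A)
    if n == 1:
        return A[0][0] % p
    total = 0
    for j in range(n):
        # delete row 1 and column j to form the minor A_{1j}
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det_mod(minor, p)
    return total % p

# det [1 2; 3 4] = 1*4 - 2*3 = -2 = 3 in G5
print(det_mod([[1, 2], [3, 4]], 5))  # 3
```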
Definition: A matrix A in Mn(Gq) is invertible iff there is a matrix B in Mn(Gq) so that
AB = BA = In, where In is the familiar identity matrix with the unity element 1 of
Gq on the main diagonal and 0 elsewhere.
In the usual matrix theory, we have the theorem that a matrix A = [a b; c d] is invertible iff
ad − bc ≠ 0, and if ad − bc = 0 then A is not invertible. The usual proof works.
Suppose that A is in M2(Gp). If ad − bc ≠ 0, then ad − bc is invertible in Gp. Thus

AB = [a b; c d] · [d/(ad−bc) −b/(ad−bc); −c/(ad−bc) a/(ad−bc)] = [1 0; 0 1],

where

B = 1/(ad−bc) · [d −b; −c a] = 1/det(A) · [d −b; −c a].
The following lemma, together with the previous computation, proves the theorem below.
Lemma: In a ring R with unity u, if an element has a multiplicative inverse, it is unique.
Proof:
Suppose t is an invertible element in a ring R with unity u, and suppose t⁻¹ and y are
both inverses of t. Then t·t⁻¹ = t·y = u, so

t⁻¹·(t·t⁻¹) = t⁻¹·(t·y)
(t⁻¹·t)·t⁻¹ = (t⁻¹·t)·y
u·t⁻¹ = u·y
t⁻¹ = y.
Theorem: Let R be a commutative ring with unity, and suppose ad − bc is a unit in R.
Suppose A = [a b; c d] is in M2(R). Then A is invertible and

B = A⁻¹ = 1/(ad − bc) · [d −b; −c a].
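The theorem's formula can be sketched over Gp = Zp with p prime. The helper name below is ours, not the paper's; the modular inverse of the determinant plays the role of 1/(ad − bc).

```python
# Sketch of the inverse formula over Gp (p prime); helper name is ours.
# A^{-1} = (ad - bc)^{-1} [d -b; -c a], all arithmetic mod p.
def inverse_2x2_mod(A, p):
    (a, b), (c, d) = A
    det = (a * d - b * c) % p
    if det == 0:
        raise ValueError("ad - bc is not a unit, so A is not invertible")
    det_inv = pow(det, -1, p)  # multiplicative inverse of det mod p
    return [[(det_inv * d) % p, (det_inv * -b) % p],
            [(det_inv * -c) % p, (det_inv * a) % p]]

# For A = [1 2; 3 4] over G5: det = 3, 3^{-1} = 2, so A^{-1} = [3 1; 4 2]
print(inverse_2x2_mod([[1, 2], [3, 4]], 5))  # [[3, 1], [4, 2]]
```

One can check directly that A·A⁻¹ = I over G5: [1·3+2·4, 1·1+2·2; 3·3+4·4, 3·1+4·2] = [11, 5; 25, 11] = [1, 0; 0, 1] mod 5.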
3. Scalars and Eigenvalues
In linear algebra, one may multiply a matrix A = (aij) in Mn(R) by a scalar λ
from a field, so that λA = (λ·aij). Thus, every entry in A is multiplied (in the ring R)
by λ, and we do exactly this in the paper. In general, λ will be in Gq, and if q = p^t for
some t, we may restrict the scalar to some subfield of Gq. Later in this paper, the scalar λ is
an eigenvalue.
Definition: An eigenvector of an n × n matrix A is a nonzero vector v such that Av = λv
for some scalar λ. The scalar λ is called an eigenvalue of A if there is a nontrivial solution
v of Av = λv. Such a v is called an eigenvector corresponding to λ.
Let (x, y) be a vector in V = Gp² = Gp × Gp.
Main Example: For our main example, let M = [1 2; 3 4] in Z5. In order to find its
eigenvalues λ, we solve the characteristic equation det(M − λI) = 0:

(1 − λ)(4 − λ) − 6 = 4 − 5λ + λ² − 6 = λ² − 5λ − 2 = λ² − 2 (in Z5).

The equation λ² − 2 = 0 is irreducible over the field Z5. We adjoin √2 to the field, and
now we have 25 elements in our new field, Z5(√2) = GF(25). The elements of Z5(α),
where α² = 2, are {b + cα : b, c ∈ Z5}.
A vector (x, y) in GF(25)² is an eigenvector if it solves the equation

[1 2; 3 4] (x, y) = α (x, y), where α² = 2.
There are 25 elements which we solved for by inspection:

x: 0       1       2       3       4
y: 0       3α+2    α+4     4α+1    2α+3

x: α       α+1     α+2     α+3     α+4
y: 2α+1    3       3α      α+2     4α+4

x: 2α      2α+1    2α+2    2α+3    2α+4
y: 4α+2    2α+4    1       3α+3    α

x: 3α      3α+1    3α+2    3α+3    3α+4
y: α+3     4α      2α+2    4       3α+1

x: 4α      4α+1    4α+2    4α+3    4α+4
y: 3α+4    α+1     4α+3    2α      2
In all, I got 25 eigenvectors, all different. The top entries (first components)
run through all 25 field elements, and so do the bottom entries; they match up without
repetition. Besides, when I add any two of these eigenvectors together, the result is still
one of the 25 possible eigenvectors. For example:

(1, 3α+2) + (3, 4α+1) = (4, 7α+3) = (4, 2α+3)
(2, α+4) + (α+3, α+2) = (α+5, 2α+6) = (α, 2α+1)

If we multiply any one of those 25 eigenvectors by a scalar (in Z5(α)), the result is
again one of the 25 eigenvectors:

2·(2α+1, 2α+4) = (4α+2, 4α+8) = (4α+2, 4α+3)
3·(α+2, 3α) = (3α+6, 9α) = (3α+1, 4α)
By the definition of vector space [DL, p. 191], we know that our nonempty set V, with
scalars from GF(25), is a vector space. Also, a subspace of a vector space V is a subset H
of V such that H is itself a vector space under the same operations of addition and scalar
multiplication that are already defined on V.
Theorem: A subset H of a vector space V is a subspace of V if and only if the following
conditions are all satisfied:
a. The zero vector of V is in H.
b. If u and v are in H, then u + v is in H.
c. If u is in H and c is any scalar, then cu is in H.
By the definition and theorem above, these 25 eigenvectors form a subspace of GF(25)²,
which has 625 elements (25 × 25). It is actually the null space of M − αI.
In linear algebra, the set of eigenvectors corresponding to an eigenvalue, together with
the zero vector, is a vector subspace of V. The same is true here.
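These claims can be verified exhaustively by computer. The sketch below is our own check, not part of the paper: it represents an element b + cα of GF(25) as the pair (b, c) with α² = 2, regenerates the 25 eigenvectors as the multiples of (1, 3α+2), and confirms both the eigenvector equation and closure under addition.

```python
# Our own verification sketch: GF(25) = G5(alpha) with alpha^2 = 2,
# elements stored as pairs (b, c) meaning b + c*alpha.
P = 5

def add(u, v):
    return ((u[0] + v[0]) % P, (u[1] + v[1]) % P)

def mul(u, v):
    b1, c1 = u
    b2, c2 = v
    # (b1 + c1*a)(b2 + c2*a) = b1*b2 + 2*c1*c2 + (b1*c2 + c1*b2)*a, since a^2 = 2
    return ((b1 * b2 + 2 * c1 * c2) % P, (b1 * c2 + c1 * b2) % P)

alpha = (0, 1)
field = [(b, c) for b in range(P) for c in range(P)]

# The 25 eigenvectors of M = [1 2; 3 4] for lambda = alpha: all (x, (3*alpha+2)*x)
slope = (2, 3)  # 2 + 3*alpha
eigvecs = {(x, mul(slope, x)) for x in field}
assert len(eigvecs) == 25

# Every listed vector really satisfies M*v = alpha*v
for x, y in eigvecs:
    Mv = (add(x, mul((2, 0), y)), add(mul((3, 0), x), mul((4, 0), y)))
    assert Mv == (mul(alpha, x), mul(alpha, y))

# Closure under addition: the sum of any two eigenvectors is again in the set
assert all((add(v[0], w[0]), add(v[1], w[1])) in eigvecs
           for v in eigvecs for w in eigvecs)
print("25 eigenvectors, closed under addition")
```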
The other eigenvalue of M = [1 2; 3 4] in Z5 is 4α (= −α), so we solve the equation

[1 2; 3 4] (x, y) = 4α (x, y), where α² = 2,

for its eigenvectors.
There are 25 elements which we found:

x: 0       1       2       3       4
y: 0       2α+2    4α+4    α+1     3α+3

x: α       α+1     α+2     α+3     α+4
y: 2α+4    4α+1    α+3     3α      2

x: 2α      2α+1    2α+2    2α+3    2α+4
y: 4α+3    α       3α+2    4       2α+1

x: 3α      3α+1    3α+2    3α+3    3α+4
y: α+2     3α+4    1       2α+3    4α

x: 4α      4α+1    4α+2    4α+3    4α+4
y: 3α+1    3       2α      4α+2    α+4
By adding any two of these eigenvectors together, the result is still one of the 25 possible
eigenvectors. For example:

(4, 3α+3) + (α+1, 4α+1) = (α+5, 7α+4) = (α, 2α+4)
(2α+4, 2α+1) + (3α+1, 3α+4) = (5α+5, 5α+5) = (0, 0)
If we multiply any one of those 25 eigenvectors by a scalar (in Z5(α)), the result is
again one of the 25 eigenvectors:

2·(2α+1, α) = (4α+2, 2α)
(α+1)·(3, α+1) = (3α+3, (α+1)²) = (3α+3, 2α+3)
Another example, similar to the previous one: with M = [0 4; 2 0] in Z5 we get

(−λ)(−λ) − 8 = λ² − 8 = λ² − 3 = 0 (in Z5).

Thus λ² − 3 = 0 is the characteristic equation, and it is irreducible over Z5. We expand to
Z5(β), where β² = 3. The eigenvectors for λ = β are:
x: 0       1       2       3       4
y: 0       4β      3β      2β      β

x: β       β+1     β+2     β+3     β+4
y: 2       4β+2    3β+2    2β+2    β+2

x: 2β      2β+1    2β+2    2β+3    2β+4
y: 4       4β+4    3β+4    2β+4    β+4

x: 3β      3β+1    3β+2    3β+3    3β+4
y: 1       4β+1    3β+1    2β+1    β+1

x: 4β      4β+1    4β+2    4β+3    4β+4
y: 3       4β+3    3β+3    2β+3    β+3
In this example, I got a different set of eigenvectors from the previous example.
However, there are still 25 eigenvectors, with no repetition among either the top or the
bottom entries.
When I add any two of these eigenvectors together, the result is still one of the 25
possible eigenvectors. These eigenvectors are again closed under vector addition and
scalar multiplication, as was checked.
For example:

(1, 4β) + (2, 3β) = (3, 7β) = (3, 2β)
(2β+2, 3β+4) + (4β+4, β+3) = (6β+6, 4β+7) = (β+1, 4β+2)
If we multiply any one of those 25 eigenvectors by a scalar (in Z5(β)), the result
is again one of the 25 eigenvectors:

2·(4, β) = (8, 2β) = (3, 2β)
2·(2β+2, 3β+4) = (4β+4, 6β+8) = (4β+4, β+3)
The other eigenvalue is 4β (= −β), and we solve for its eigenvectors:

[0 4; 2 0] (x, y) = 4β (x, y), where β² = 3.

There are 25 elements which we found:

x: 0       1       2       3       4
y: 0       β       2β      3β      4β

x: β       β+1     β+2     β+3     β+4
y: 3       β+3     2β+3    3β+3    4β+3

x: 2β      2β+1    2β+2    2β+3    2β+4
y: 1       β+1     2β+1    3β+1    4β+1

x: 3β      3β+1    3β+2    3β+3    3β+4
y: 4       β+4     2β+4    3β+4    4β+4

x: 4β      4β+1    4β+2    4β+3    4β+4
y: 2       β+2     2β+2    3β+2    4β+2
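The table for λ = β above can also be checked mechanically. The following sketch is our own, not part of the paper: it stores b + cβ as the pair (b, c) with β² = 3, regenerates the eigenvectors as the multiples of (1, 4β), and verifies Mv = βv for all 25 of them.

```python
# Our own verification sketch: GF(25) = G5(beta) with beta^2 = 3,
# elements stored as pairs (b, c) meaning b + c*beta.
P = 5

def add(u, v):
    return ((u[0] + v[0]) % P, (u[1] + v[1]) % P)

def mul(u, v):
    b1, c1 = u
    b2, c2 = v
    # cross terms contribute 3*c1*c2 because beta^2 = 3 in this extension
    return ((b1 * b2 + 3 * c1 * c2) % P, (b1 * c2 + c1 * b2) % P)

beta = (0, 1)
field = [(b, c) for b in range(P) for c in range(P)]

# For lambda = beta the table says y = 4*beta*x
slope = (0, 4)  # 4*beta
eigvecs = {(x, mul(slope, x)) for x in field}
assert len(eigvecs) == 25

# M = [0 4; 2 0], so M(x, y) = (4y, 2x); check M*v = beta*v for each vector
for x, y in eigvecs:
    Mv = (mul((4, 0), y), mul((2, 0), x))
    assert Mv == (mul(beta, x), mul(beta, y))
print("all 25 vectors check out")
```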
4. Some Different Examples
Now we try a couple of slightly different examples and see what happens.
Using M = [1 3; 2 2] in Z5 we get

(1 − λ)(2 − λ) − 6 = λ² − 3λ − 4 = (λ − 4)(λ + 1), so λ = 4 or λ = −1 = 4.

This time, λ is actually an integer. Solving for the eigenvectors, we get

x: 0   1   2   3   4
y: 0   1   2   3   4

This also is a subspace of V = Gp² = Gp × Gp with p = 5.
The second example in this section: M = [0 4; 2 3]. The calculation is:

(−λ)(3 − λ) − 8 = λ² − 3λ − 8 = λ² + 2λ + 2 = (λ − 1)(λ − 2) (in Z5),

so λ = 2 or λ = 1. The λ's are different integers.
Solving [0 4; 2 3] (x, y) = 2 (x, y) for λ = 2, there are only 5 eigenvectors:

x: 0   1   2   3   4
y: 0   3   1   4   2

Solving [0 4; 2 3] (x, y) = 1·(x, y) for λ = 1:

x: 0   1   2   3   4
y: 0   4   3   2   1
Another example: M = [1 4; 0 3]. Here

(1 − λ)(3 − λ) = 0, so λ = 3 or λ = 1.

For λ = 3, we solve [1 4; 0 3] (x, y) = 3 (x, y) and get eigenvectors:

x: 0   1   2   3   4
y: 0   3   1   4   2

For λ = 1, we solve [1 4; 0 3] (x, y) = 1·(x, y) and get eigenvectors:

x: 0   1   2   3   4
y: 0   0   0   0   0
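Because G5 has only five scalars, the integer eigenvalues in this section can be found by simply trying every candidate λ and testing det(M − λI) = 0 mod 5. The sketch below is our own illustration of that brute-force search, not code from the paper.

```python
# Our own brute-force sketch: over G5 there are only five candidate scalars,
# so test det(M - lambda*I) = 0 (mod 5) for each lambda in {0, ..., 4}.
P = 5

def det2(a, b, c, d):
    return (a * d - b * c) % P

def eigenvalues(M):
    (a, b), (c, d) = M
    return [lam for lam in range(P) if det2(a - lam, b, c, d - lam) == 0]

print(eigenvalues([[1, 3], [2, 2]]))  # [4]  (repeated root, since -1 = 4)
print(eigenvalues([[0, 4], [2, 3]]))  # [1, 2]
print(eigenvalues([[1, 4], [0, 3]]))  # [1, 3]
```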
5. Diagonalization
We return here to the first example in the paper, M = [1 2; 3 4] in Z5.
There are 25 eigenvectors for each of its two eigenvalues, α and 4α: {Ui}, i = 1, ..., 25,
for α, and {Vj}, j = 1, ..., 25, for 4α. We pick one vector (not zero) randomly from each
of these two sets, so Ui ≠ 0 and Vj ≠ 0.
However, we observe that Ui · Vj ≠ 0 for any i = 2, ..., 25 and j = 2, ..., 25
(indexing so that U1 = V1 = 0). Therefore P = [Ui Vj] cannot be an orthogonal
matrix: Pᵀ ≠ P⁻¹.
By the definition of diagonalization from a linear algebra book ([DL], pp. 288–289),
for a 2-by-2 matrix,

AP = A[Ui Vj] = [AUi AVj] = [λ1·Ui λ2·Vj] = [Ui Vj] [λ1 0; 0 λ2],

so

P⁻¹AP = D = [λ1 0; 0 λ2] and A = PDP⁻¹.
When

U = (1, 3α+2),  V = (1, 2α+2),  P = [1 1; 3α+2 2α+2],  D = [α 0; 0 4α],

we have det P = (2α+2) − (3α+2) = −α = 4α, so

P⁻¹ = (4α)⁻¹ [2α+2 −1; −(3α+2) 1] = 2α [2α+2 4; 2α+3 1] = [4α+3 3α; α+3 2α].

Multiplying out gives

PDP⁻¹ = [1 1; 3α+2 2α+2] [α 0; 0 4α] [4α+3 3α; α+3 2α] = [1 2; 3 4] = A.
Since the condition holds, we can conclude that

[1 2; 3 4] = P [α 0; 0 4α] P⁻¹,

where P = [Ui Vj] for eigenvectors Ui (corresponding to α) and Vj (corresponding to 4α)
chosen as above.
I also did a couple more examples (not in this paper) to check whether matrices with
entries in a finite field are diagonalizable or not, and the results all came out nicely. One
reason I believe such a matrix is diagonalizable is that when P = [Ui Vj], we have
PP⁻¹ = P⁻¹P = I, since the invertibility theory for matrices with entries in a finite field
is the same as the usual matrix theory.
Therefore, a matrix with entries in a finite field which has two distinct eigenvalues is
diagonalizable, and one may apply the formula A = PDP⁻¹.
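The diagonalization A = PDP⁻¹ can itself be verified by machine over GF(25). The sketch below is our own, not the paper's: all field and matrix helpers are hypothetical names, elements of GF(25) are stored as pairs (b, c) meaning b + cα with α² = 2, and the field inverse is found by brute force over the 25 elements.

```python
# Our own sketch verifying A = P D P^{-1} for M = [1 2; 3 4] over GF(25),
# with GF(25) = G5(alpha), alpha^2 = 2; helper names are hypothetical.
P5 = 5

def fadd(u, v): return ((u[0] + v[0]) % P5, (u[1] + v[1]) % P5)
def fneg(u): return ((-u[0]) % P5, (-u[1]) % P5)
def fmul(u, v):
    b1, c1 = u
    b2, c2 = v
    return ((b1 * b2 + 2 * c1 * c2) % P5, (b1 * c2 + c1 * b2) % P5)
def finv(u):
    # brute-force multiplicative inverse in the 25-element field
    for v in [(b, c) for b in range(P5) for c in range(P5)]:
        if fmul(u, v) == (1, 0):
            return v
    raise ZeroDivisionError("element is not a unit")

def matmul(X, Y):
    return [[fadd(fmul(X[i][0], Y[0][j]), fmul(X[i][1], Y[1][j]))
             for j in range(2)] for i in range(2)]

def matinv(X):
    # 2x2 inverse via the paper's formula: (ad - bc)^{-1} [d -b; -c a]
    d = finv(fadd(fmul(X[0][0], X[1][1]), fneg(fmul(X[0][1], X[1][0]))))
    return [[fmul(d, X[1][1]), fmul(d, fneg(X[0][1]))],
            [fmul(d, fneg(X[1][0])), fmul(d, X[0][0])]]

alpha = (0, 1)
e = lambda n: (n % P5, 0)  # embed an integer scalar into GF(25)

M = [[e(1), e(2)], [e(3), e(4)]]
# Columns of P: an eigenvector for alpha and one for 4*alpha
Pm = [[e(1), e(1)], [(2, 3), (2, 2)]]   # (1, 3a+2) and (1, 2a+2)
D = [[alpha, e(0)], [e(0), fmul(e(4), alpha)]]

assert matmul(matmul(Pm, D), matinv(Pm)) == M
print("A = P D P^-1 holds over GF(25)")
```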
References
[AJ] Arthur Jones, Sidney A. Morris, and Kenneth R. Pearson, Abstract Algebra and Famous
Impossibilities, Springer-Verlag, New York, 1994.
[DL] David C. Lay, Linear Algebra, instructor's edition, Addison-Wesley, Maryland, 1994.
[JR] Joseph J. Rotman, Abstract Algebra with Applications, 3rd edition, Pearson
Education, New Jersey, 2006.