22.2. Eigendecompositions

Eigenvalues are among the most useful notions we will encounter when studying linear algebra; however, as a beginner, it is easy to overlook their importance. Below, we introduce eigendecomposition and try to convey some sense of just why it is so important.

Suppose that we have a matrix \(A\) with the following entries:

(22.2.1)\[\begin{split}\mathbf{A} = \begin{bmatrix} 2 & 0 \\ 0 & -1 \end{bmatrix}.\end{split}\]

If we apply \(A\) to any vector \(\mathbf{v} = [x, y]^\top\), we obtain a vector \(\mathbf{A}\mathbf{v} = [2x, -y]^\top\). This has an intuitive interpretation: stretch the vector to be twice as wide in the \(x\)-direction, and then flip it in the \(y\)-direction.

However, there are some vectors for which the direction of the line they lie on remains unchanged. Namely, \([1, 0]^\top\) gets sent to \([2, 0]^\top\) and \([0, 1]^\top\) gets sent to \([0, -1]^\top\). These vectors still lie on the same lines; the only modification is that the matrix stretches them by a factor of \(2\) and \(-1\) respectively. We call such vectors eigenvectors, and the factors by which they are stretched, eigenvalues.
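To make this concrete, here is a minimal sketch in PyTorch (one of the frameworks used later in this section); the test vector \([3, 5]^\top\) is an arbitrary choice of ours, not from the text:

import torch

A = torch.tensor([[2.0, 0.0], [0.0, -1.0]])

v = torch.tensor([3.0, 5.0])
print(A @ v)                         # [6., -5.]: stretched in x, flipped in y

print(A @ torch.tensor([1.0, 0.0]))  # [2., 0.]: same line, scaled by 2
print(A @ torch.tensor([0.0, 1.0]))  # [0., -1.]: same line, scaled by -1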

In general, if we can find a number \(\lambda\) and a nonzero vector \(\mathbf{v}\) such that

(22.2.2)\[\mathbf{A}\mathbf{v} = \lambda \mathbf{v},\]

we say that \(\mathbf{v}\) is an eigenvector of \(\mathbf{A}\) and \(\lambda\) is its associated eigenvalue.

22.2.1. Finding Eigenvalues

Let’s figure out how to find them. By subtracting \(\lambda \mathbf{v}\) from both sides and then factoring out the vector, we see that the above is equivalent to:

(22.2.3)\[(\mathbf{A} - \lambda \mathbf{I})\mathbf{v} = 0.\]

For (22.2.3) to hold for a nonzero \(\mathbf{v}\), the matrix \((\mathbf{A} - \lambda \mathbf{I})\) must compress some direction down to zero; hence it is not invertible, and thus its determinant is zero. Thus, we can find the eigenvalues by determining for which \(\lambda\) we have \(\det(\mathbf{A}-\lambda \mathbf{I}) = 0\). Once we find the eigenvalues, we can solve \(\mathbf{A}\mathbf{v} = \lambda \mathbf{v}\) to find the associated eigenvector(s).

22.2.1.1. An Example

Let’s see this with a more challenging matrix

(22.2.4)\[\begin{split}\mathbf{A} = \begin{bmatrix} 2 & 1\\ 2 & 3 \end{bmatrix}.\end{split}\]

If we consider \(\det(\mathbf{A}-\lambda \mathbf{I}) = 0\), we see this is equivalent to the polynomial equation \(0 = (2-\lambda)(3-\lambda)-2 = (4-\lambda)(1-\lambda)\). Thus, the two eigenvalues are \(4\) and \(1\). To find the associated vectors, we then need to solve

(22.2.5)\[\begin{split}\begin{bmatrix} 2 & 1\\ 2 & 3 \end{bmatrix}\begin{bmatrix}x \\ y\end{bmatrix} = \begin{bmatrix}x \\ y\end{bmatrix} \; \textrm{and} \; \begin{bmatrix} 2 & 1\\ 2 & 3 \end{bmatrix}\begin{bmatrix}x \\ y\end{bmatrix} = \begin{bmatrix}4x \\ 4y\end{bmatrix} .\end{split}\]

The first system gives \(y = -x\) and the second gives \(y = 2x\), so we can take the vectors \([1, -1]^\top\) and \([1, 2]^\top\) respectively.
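Before turning to a library routine, we can sanity-check these hand-computed eigenvectors directly; a minimal sketch in PyTorch:

import torch

A = torch.tensor([[2.0, 1.0], [2.0, 3.0]])

v1 = torch.tensor([1.0, -1.0])  # eigenvector for eigenvalue 1
v2 = torch.tensor([1.0, 2.0])   # eigenvector for eigenvalue 4

print(A @ v1)  # equals 1 * v1
print(A @ v2)  # equals 4 * v2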

We can also compute the eigendecomposition using built-in routines such as numpy.linalg.eig and its PyTorch and TensorFlow counterparts.

%matplotlib inline
import torch
from IPython import display
from d2l import torch as d2l

torch.linalg.eig(torch.tensor([[2, 1], [2, 3]], dtype=torch.float64))
torch.return_types.linalg_eig(
eigenvalues=tensor([1.+0.j, 4.+0.j], dtype=torch.complex128),
eigenvectors=tensor([[-0.7071+0.j, -0.4472+0.j],
        [ 0.7071+0.j, -0.8944+0.j]], dtype=torch.complex128))
%matplotlib inline
import numpy as np
from IPython import display
from d2l import mxnet as d2l

np.linalg.eig(np.array([[2, 1], [2, 3]]))
(array([1., 4.]),
 array([[-0.70710678, -0.4472136 ],
        [ 0.70710678, -0.89442719]]))
%matplotlib inline
import tensorflow as tf
from IPython import display
from d2l import tensorflow as d2l

tf.linalg.eig(tf.constant([[2, 1], [2, 3]], dtype=tf.float64))
(<tf.Tensor: shape=(2,), dtype=complex128, numpy=array([1.+0.j, 4.+0.j])>,
 <tf.Tensor: shape=(2, 2), dtype=complex128, numpy=
 array([[-0.70710678+0.j, -0.4472136 +0.j],
        [ 0.70710678+0.j, -0.89442719+0.j]])>)

Note that these routines normalize the eigenvectors to be of length one, whereas we took ours to be of arbitrary length. Additionally, the choice of sign is arbitrary. However, the computed vectors are parallel to the ones we found by hand, with the same eigenvalues.

22.2.2. Decomposing Matrices

Let’s continue the previous example one step further. Let

(22.2.6)\[\begin{split}\mathbf{W} = \begin{bmatrix} 1 & 1 \\ -1 & 2 \end{bmatrix},\end{split}\]

be the matrix where the columns are the eigenvectors of the matrix \(\mathbf{A}\). Let

(22.2.7)\[\begin{split}\boldsymbol{\Sigma} = \begin{bmatrix} 1 & 0 \\ 0 & 4 \end{bmatrix},\end{split}\]

be the matrix with the associated eigenvalues on the diagonal. Then the definition of eigenvalues and eigenvectors tells us that

(22.2.8)\[\mathbf{A}\mathbf{W} =\mathbf{W} \boldsymbol{\Sigma} .\]

The matrix \(\mathbf{W}\) is invertible, so multiplying both sides of (22.2.8) by \(\mathbf{W}^{-1}\) on the right, we see that we may write

(22.2.9)\[\mathbf{A} = \mathbf{W} \boldsymbol{\Sigma} \mathbf{W}^{-1}.\]

In the next section we will see some nice consequences of this, but for now we need only know that such a decomposition will exist as long as we can find a full collection of linearly independent eigenvectors (so that \(W\) is invertible).
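We can verify this decomposition numerically; a minimal sketch in PyTorch using the \(\mathbf{W}\) and \(\boldsymbol{\Sigma}\) defined above:

import torch

A = torch.tensor([[2.0, 1.0], [2.0, 3.0]])
W = torch.tensor([[1.0, 1.0], [-1.0, 2.0]])   # columns are the eigenvectors
Sigma = torch.diag(torch.tensor([1.0, 4.0]))  # eigenvalues on the diagonal

print(W @ Sigma @ torch.linalg.inv(W))  # reconstructs A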

22.2.3. Operations on Eigendecompositions

One nice thing about eigendecompositions (22.2.9) is that we can write many operations we usually encounter cleanly in terms of the eigendecomposition. As a first example, consider:

(22.2.10)\[\mathbf{A}^n = \overbrace{\mathbf{A}\cdots \mathbf{A}}^{\textrm{$n$ times}} = \overbrace{(\mathbf{W}\boldsymbol{\Sigma} \mathbf{W}^{-1})\cdots(\mathbf{W}\boldsymbol{\Sigma} \mathbf{W}^{-1})}^{\textrm{$n$ times}} = \mathbf{W}\overbrace{\boldsymbol{\Sigma}\cdots\boldsymbol{\Sigma}}^{\textrm{$n$ times}}\mathbf{W}^{-1} = \mathbf{W}\boldsymbol{\Sigma}^n \mathbf{W}^{-1}.\]

This tells us that for any positive power of a matrix, the eigendecomposition is obtained by just raising the eigenvalues to the same power. The same can be shown for negative powers, so if we want to invert a matrix we need only consider

(22.2.11)\[\mathbf{A}^{-1} = \mathbf{W}\boldsymbol{\Sigma}^{-1} \mathbf{W}^{-1},\]

or in other words, just invert each eigenvalue. This will work as long as each eigenvalue is non-zero, so we see that being invertible is the same as having no zero eigenvalues.
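To make this concrete, here is a small sketch in PyTorch that continues the running example and compares the eigendecomposition formulas for \(\mathbf{A}^3\) and \(\mathbf{A}^{-1}\) with direct computation:

import torch

A = torch.tensor([[2.0, 1.0], [2.0, 3.0]])
W = torch.tensor([[1.0, 1.0], [-1.0, 2.0]])
Sigma = torch.diag(torch.tensor([1.0, 4.0]))
W_inv = torch.linalg.inv(W)

# Elementwise powers coincide with matrix powers here because Sigma is diagonal
print(W @ Sigma**3 @ W_inv)                 # matches torch.linalg.matrix_power(A, 3)
print(W @ torch.linalg.inv(Sigma) @ W_inv)  # matches torch.linalg.inv(A)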

Indeed, additional work can show that if \(\lambda_1, \ldots, \lambda_n\) are the eigenvalues of a matrix, then the determinant of that matrix is

(22.2.12)\[\det(\mathbf{A}) = \lambda_1 \cdots \lambda_n,\]

or the product of all the eigenvalues. This makes sense intuitively because whatever stretching \(\mathbf{W}\) does, \(\mathbf{W}^{-1}\) undoes, so in the end the only stretching that happens is by multiplication by the diagonal matrix \(\boldsymbol{\Sigma}\), which stretches volumes by the product of the diagonal elements.
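As a quick numerical check (a sketch in PyTorch with the same example matrix), the product of the eigenvalues indeed matches the determinant:

import torch

A = torch.tensor([[2.0, 1.0], [2.0, 3.0]])
eigenvalues = torch.linalg.eig(A).eigenvalues

print(torch.prod(eigenvalues).real)  # 1 * 4 = 4
print(torch.linalg.det(A))           # also 4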

Finally, recall that the rank is the maximum number of linearly independent columns of a matrix. By examining the eigendecomposition closely, we can see that the rank is the same as the number of non-zero eigenvalues of \(\mathbf{A}\).

The examples could continue, but hopefully the point is clear: eigendecomposition can simplify many linear-algebraic computations and is a fundamental operation underlying many numerical algorithms and much of the analysis that we do in linear algebra.

22.2.4. Eigendecompositions of Symmetric Matrices

It is not always possible to find enough linearly independent eigenvectors for the above process to work. For instance, the matrix

(22.2.13)\[\begin{split}\mathbf{A} = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix},\end{split}\]

has only a single linearly independent eigenvector, namely \([1, 0]^\top\). To handle such matrices, we require more advanced techniques than we can cover here (such as the Jordan normal form or the singular value decomposition). We will often need to restrict our attention to those matrices where we can guarantee the existence of a full set of eigenvectors.

The most commonly encountered family are the symmetric matrices, which are those matrices where \(\mathbf{A} = \mathbf{A}^\top\). In this case, we may take \(W\) to be an orthogonal matrix—a matrix whose columns are all length one vectors that are at right angles to one another, where \(\mathbf{W}^\top = \mathbf{W}^{-1}\)—and all the eigenvalues will be real. Thus, in this special case, we can write (22.2.9) as

(22.2.14)\[\mathbf{A} = \mathbf{W}\boldsymbol{\Sigma}\mathbf{W}^\top .\]
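We can check both properties numerically; a small sketch in PyTorch using torch.linalg.eigh, PyTorch's routine for symmetric (Hermitian) matrices. The particular random symmetric matrix below is our own construction for illustration:

import torch

torch.manual_seed(0)
B = torch.randn(3, 3)
A = B + B.T  # symmetrize so that A = A^T

eigenvalues, W = torch.linalg.eigh(A)
print(eigenvalues)                        # all real
print(W.T @ W)                            # approximately the identity: orthonormal columns
print(W @ torch.diag(eigenvalues) @ W.T)  # reconstructs A (up to rounding)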

22.2.5. Gershgorin Circle Theorem

Eigenvalues are often difficult to reason about intuitively. If presented with an arbitrary matrix, there is little we can say about its eigenvalues without computing them. There is, however, one theorem that makes them easy to approximate well when the largest entries lie on the diagonal.

Let \(\mathbf{A} = (a_{ij})\) be any square matrix (\(n\times n\)). We will define \(r_i = \sum_{j \neq i} |a_{ij}|\). Let \(\mathcal{D}_i\) represent the disc in the complex plane with center \(a_{ii}\) and radius \(r_i\). Then, every eigenvalue of \(\mathbf{A}\) is contained in one of the \(\mathcal{D}_i\).

This can take a bit of unpacking, so let’s look at an example. Consider the matrix:

(22.2.15)\[\begin{split}\mathbf{A} = \begin{bmatrix} 1.0 & 0.1 & 0.1 & 0.1 \\ 0.1 & 3.0 & 0.2 & 0.3 \\ 0.1 & 0.2 & 5.0 & 0.5 \\ 0.1 & 0.3 & 0.5 & 9.0 \end{bmatrix}.\end{split}\]

We have \(r_1 = 0.3\), \(r_2 = 0.6\), \(r_3 = 0.8\) and \(r_4 = 0.9\). The matrix is symmetric, so all eigenvalues are real. This means that all of our eigenvalues will be in one of the ranges of

(22.2.16)\[[a_{11}-r_1, a_{11}+r_1] = [0.7, 1.3],\]
(22.2.17)\[[a_{22}-r_2, a_{22}+r_2] = [2.4, 3.6],\]
(22.2.18)\[[a_{33}-r_3, a_{33}+r_3] = [4.2, 5.8],\]
(22.2.19)\[[a_{44}-r_4, a_{44}+r_4] = [8.1, 9.9].\]

Performing the numerical computation shows that the eigenvalues are approximately \(0.99\), \(2.97\), \(4.95\), \(9.08\), all comfortably inside the ranges provided.

A = torch.tensor([[1.0, 0.1, 0.1, 0.1],
                  [0.1, 3.0, 0.2, 0.3],
                  [0.1, 0.2, 5.0, 0.5],
                  [0.1, 0.3, 0.5, 9.0]])

v, _ = torch.linalg.eig(A)
v
tensor([0.9923+0.j, 9.0803+0.j, 4.9539+0.j, 2.9734+0.j])
A = np.array([[1.0, 0.1, 0.1, 0.1],
              [0.1, 3.0, 0.2, 0.3],
              [0.1, 0.2, 5.0, 0.5],
              [0.1, 0.3, 0.5, 9.0]])

v, _ = np.linalg.eig(A)
v
array([9.08033648, 0.99228545, 4.95394089, 2.97343718])
A = tf.constant([[1.0, 0.1, 0.1, 0.1],
                 [0.1, 3.0, 0.2, 0.3],
                 [0.1, 0.2, 5.0, 0.5],
                 [0.1, 0.3, 0.5, 9.0]])

v, _ = tf.linalg.eigh(A)
v
<tf.Tensor: shape=(4,), dtype=float32, numpy=array([0.99228525, 2.9734395 , 4.953943  , 9.080336  ], dtype=float32)>
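For completeness, the Gershgorin intervals themselves are easy to compute directly from the entries; here is a small sketch in PyTorch that re-creates the same matrix and prints each interval \([a_{ii}-r_i, a_{ii}+r_i]\):

import torch

A = torch.tensor([[1.0, 0.1, 0.1, 0.1],
                  [0.1, 3.0, 0.2, 0.3],
                  [0.1, 0.2, 5.0, 0.5],
                  [0.1, 0.3, 0.5, 9.0]])

centers = torch.diag(A)
radii = torch.sum(torch.abs(A), dim=1) - torch.abs(centers)  # r_i = sum_{j != i} |a_ij|

for c, r in zip(centers, radii):
    print(f'[{(c - r).item():.1f}, {(c + r).item():.1f}]')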

In this way, eigenvalues can be approximated, and the approximations will be fairly accurate in the case that the diagonal is significantly larger than all the other elements.

It is a small thing, but with a complex and subtle topic like eigendecomposition, it is good to get any intuitive grasp we can.

22.2.6. A Useful Application: The Growth of Iterated Maps

Now that we understand what eigenvectors are in principle, let’s see how they can be used to provide a deep understanding of a problem central to neural network behavior: proper weight initialization.

22.2.6.1. Eigenvectors as Long Term Behavior

The full mathematical investigation of the initialization of deep neural networks is beyond the scope of the text, but we can see a toy version here to understand how eigenvalues can help us see how these models work. As we know, neural networks operate by interspersing layers of linear transformations with non-linear operations. For simplicity here, we will assume that there is no non-linearity, and that the transformation is a single repeated matrix operation \(A\), so that the output of our model is

(22.2.20)\[\mathbf{v}_{out} = \mathbf{A}\cdot \mathbf{A}\cdots \mathbf{A} \mathbf{v}_{in} = \mathbf{A}^N \mathbf{v}_{in}.\]

When these models are initialized, \(A\) is taken to be a random matrix with Gaussian entries, so let’s make one of those. To be concrete, we start with a mean zero, variance one Gaussian distributed \(5 \times 5\) matrix.

torch.manual_seed(42)

k = 5
A = torch.randn(k, k, dtype=torch.float64)
A
tensor([[ 0.2996,  0.2424,  0.2832, -0.2329,  0.6712],
        [ 0.7818, -1.7903, -1.7484,  0.1735, -0.1182],
        [-1.7446, -0.4695,  0.4573,  0.5177, -0.2771],
        [-0.6641,  0.6551,  0.2616, -1.5265, -0.3311],
        [-0.6378,  0.1072,  0.7096,  0.3009, -0.2869]], dtype=torch.float64)
np.random.seed(8675309)

k = 5
A = np.random.randn(k, k)
A
array([[ 0.58902366,  0.73311856, -1.1621888 , -0.55681601, -0.77248843],
       [-0.16822143, -0.41650391, -1.37843129,  0.74925588,  0.17888446],
       [ 0.69401121, -1.9780535 , -0.83381434,  0.56437344,  0.31201299],
       [-0.87334496,  0.15601291, -0.38710108, -0.23920821,  0.88850104],
       [ 1.29385371, -0.76774106,  0.20131613,  0.91800842,  0.38974115]])
k = 5
A = tf.random.normal((k, k), dtype=tf.float64)
A
<tf.Tensor: shape=(5, 5), dtype=float64, numpy=
array([[-0.97812483,  1.20445062,  1.11533381, -0.18427915, -1.77590971],
       [-1.15848523,  1.17169049, -1.19847285,  0.55904217, -1.11710177],
       [ 0.55927053,  0.66065268, -1.0880367 , -0.24903121,  0.58888241],
       [ 0.4700517 ,  1.03649145, -0.55023977, -0.33531388, -0.80425649],
       [ 0.83811055,  0.16131817,  1.07698088,  1.71776595, -0.6111541 ]])>

22.2.6.2. Behavior on Random Data

For simplicity in our toy model, we will assume that the data vector we feed in, \(\mathbf{v}_{in}\), is a random five-dimensional Gaussian vector. Let’s think about what we want to have happen. For context, let’s think of a generic ML problem, where we are trying to turn input data, like an image, into a prediction, like the probability that the image is a picture of a cat. If repeated application of \(\mathbf{A}\) stretches a random vector out to be very long, then small changes in input will be amplified into large changes in output—tiny modifications of the input image would lead to vastly different predictions. This does not seem right!

On the flip side, if \(\mathbf{A}\) shrinks random vectors to be shorter, then after running through many layers, the vector will essentially shrink to nothing, and the output will not depend on the input. This is clearly not right either!

We need to walk the narrow line between growth and decay to make sure that our output changes depending on our input, but not too much!

Let’s see what happens when we repeatedly multiply our matrix \(\mathbf{A}\) against a random input vector, and keep track of the norm.

# Calculate the sequence of norms after repeatedly applying `A`
v_in = torch.randn(k, 1, dtype=torch.float64)

norm_list = [torch.norm(v_in).item()]
for i in range(1, 100):
    v_in = A @ v_in
    norm_list.append(torch.norm(v_in).item())

d2l.plot(torch.arange(0, 100), norm_list, 'Iteration', 'Value')
# Calculate the sequence of norms after repeatedly applying `A`
v_in = np.random.randn(k, 1)

norm_list = [np.linalg.norm(v_in)]
for i in range(1, 100):
    v_in = A.dot(v_in)
    norm_list.append(np.linalg.norm(v_in))

d2l.plot(np.arange(0, 100), norm_list, 'Iteration', 'Value')
# Calculate the sequence of norms after repeatedly applying `A`
v_in = tf.random.normal((k, 1), dtype=tf.float64)

norm_list = [tf.norm(v_in).numpy()]
for i in range(1, 100):
    v_in = tf.matmul(A, v_in)
    norm_list.append(tf.norm(v_in).numpy())

d2l.plot(tf.range(0, 100), norm_list, 'Iteration', 'Value')

The norm is growing uncontrollably! Indeed if we take the list of quotients, we will see a pattern.

# Compute the scaling factor of the norms
norm_ratio_list = []
for i in range(1, 100):
    norm_ratio_list.append(norm_list[i]/norm_list[i - 1])

d2l.plot(torch.arange(1, 100), norm_ratio_list, 'Iteration', 'Ratio')
# Compute the scaling factor of the norms
norm_ratio_list = []
for i in range(1, 100):
    norm_ratio_list.append(norm_list[i]/norm_list[i - 1])

d2l.plot(np.arange(1, 100), norm_ratio_list, 'Iteration', 'Ratio')
# Compute the scaling factor of the norms
norm_ratio_list = []
for i in range(1, 100):
    norm_ratio_list.append(norm_list[i]/norm_list[i - 1])

d2l.plot(tf.range(1, 100), norm_ratio_list, 'Iteration', 'Ratio')

If we look at the last portion of the above computation, we see that the random vector is stretched by a factor of 1.974459321485[...], where the portion at the end shifts a little, but the stretching factor is stable.

22.2.6.3. Relating Back to Eigenvectors

We have seen that eigenvectors and eigenvalues correspond to the amount something is stretched, but that was for specific vectors and specific stretches. Let’s take a look at what they are for \(\mathbf{A}\). A bit of a caveat here: it turns out that to see them all, we will need to go to complex numbers. You can think of these as stretches and rotations. By taking the norm of the complex number (the square root of the sum of squares of the real and imaginary parts) we can measure that stretching factor. Let’s also sort them.

# Compute the eigenvalues
eigs = torch.linalg.eig(A).eigenvalues.tolist()
norm_eigs = [torch.abs(torch.tensor(x)) for x in eigs]
norm_eigs.sort()
print(f'norms of eigenvalues: {norm_eigs}')
norms of eigenvalues: [tensor(0.3490), tensor(1.1296), tensor(1.1296), tensor(1.1828), tensor(2.4532)]
# Compute the eigenvalues
eigs = np.linalg.eigvals(A).tolist()
norm_eigs = [np.absolute(x) for x in eigs]
norm_eigs.sort()
print(f'norms of eigenvalues: {norm_eigs}')
norms of eigenvalues: [0.8786205280381857, 1.2757952665062624, 1.4983381517710659, 1.4983381517710659, 1.974459321485074]
# Compute the eigenvalues. Note that `tf.linalg.eig` handles general
# (non-symmetric) matrices; `tf.linalg.eigh` assumes a symmetric input.
eigs = tf.linalg.eig(A)[0].numpy().tolist()
norm_eigs = [abs(x) for x in eigs]
norm_eigs.sort()
print(f'norms of eigenvalues: {norm_eigs}')

22.2.6.4. An Observation

We see something a bit unexpected happening here: that number we identified before for the long term stretching of our matrix \(\mathbf{A}\) applied to a random vector is exactly (accurate to thirteen decimal places!) the largest eigenvalue of \(\mathbf{A}\). This is clearly not a coincidence!

But, if we now think about what is happening geometrically, this starts to make sense. Consider a random vector. This random vector points a little in every direction, so in particular, it points at least a little bit in the same direction as the eigenvector of \(\mathbf{A}\) associated with the largest eigenvalue. This is so important that it is called the principal eigenvalue and principal eigenvector. After applying \(\mathbf{A}\), our random vector gets stretched in every possible direction, since it has a component along every eigenvector, but it is stretched most of all in the direction associated with this principal eigenvector. What this means is that after applying \(\mathbf{A}\), our random vector is longer and points in a direction closer to being aligned with the principal eigenvector. After applying the matrix many times, the alignment with the principal eigenvector becomes closer and closer until, for all practical purposes, our random vector has been transformed into the principal eigenvector! Indeed, this algorithm is the basis for what is known as the power iteration for finding the largest eigenvalue and eigenvector of a matrix. For details see, for example, (Golub and Van Loan, 1996).
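The procedure just described is essentially the power iteration. Below is a minimal sketch in PyTorch; the helper name power_iteration and the fixed iteration count are our own choices for illustration, not part of any library:

import torch

def power_iteration(A, num_iters=100):
    """Estimate the largest-magnitude eigenvalue and its eigenvector."""
    v = torch.randn(A.shape[1], 1, dtype=A.dtype)
    for _ in range(num_iters):
        v = A @ v
        v = v / torch.norm(v)  # re-normalize each step to avoid overflow/underflow
    # Rayleigh-quotient-style estimate of the eigenvalue once v has converged
    eigenvalue = (v.T @ A @ v) / (v.T @ v)
    return eigenvalue.item(), v

A = torch.tensor([[2.0, 1.0], [2.0, 3.0]])
lam, v = power_iteration(A)
print(lam)  # approximately 4, the largest eigenvalue of this matrix

Normalizing at every step keeps the numbers finite; tracking the ratio of successive norms, as we did above, yields the same estimate of the principal eigenvalue.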

22.2.6.5. Fixing the Normalization

Now, from the discussion above, we concluded that we do not want a random vector to be stretched or squished at all; we would like random vectors to stay about the same size throughout the entire process. To do so, we now rescale our matrix by this principal eigenvalue so that the largest eigenvalue is instead now just one. Let’s see what happens in this case.

# Rescale the matrix `A`
A /= norm_eigs[-1]

# Do the same experiment again
v_in = torch.randn(k, 1, dtype=torch.float64)

norm_list = [torch.norm(v_in).item()]
for i in range(1, 100):
    v_in = A @ v_in
    norm_list.append(torch.norm(v_in).item())

d2l.plot(torch.arange(0, 100), norm_list, 'Iteration', 'Value')
# Rescale the matrix `A`
A /= norm_eigs[-1]

# Do the same experiment again
v_in = np.random.randn(k, 1)

norm_list = [np.linalg.norm(v_in)]
for i in range(1, 100):
    v_in = A.dot(v_in)
    norm_list.append(np.linalg.norm(v_in))

d2l.plot(np.arange(0, 100), norm_list, 'Iteration', 'Value')
# Rescale the matrix `A`
A /= norm_eigs[-1]

# Do the same experiment again
v_in = tf.random.normal((k, 1), dtype=tf.float64)

norm_list = [tf.norm(v_in).numpy()]
for i in range(1, 100):
    v_in = tf.matmul(A, v_in)
    norm_list.append(tf.norm(v_in).numpy())

d2l.plot(tf.range(0, 100), norm_list, 'Iteration', 'Value')

We can also plot the ratio between consecutive norms as before and see that indeed it stabilizes.

# Also plot the ratio
norm_ratio_list = []
for i in range(1, 100):
    norm_ratio_list.append(norm_list[i]/norm_list[i-1])

d2l.plot(torch.arange(1, 100), norm_ratio_list, 'Iteration', 'Ratio')
# Also plot the ratio
norm_ratio_list = []
for i in range(1, 100):
    norm_ratio_list.append(norm_list[i]/norm_list[i-1])

d2l.plot(np.arange(1, 100), norm_ratio_list, 'Iteration', 'Ratio')
# Also plot the ratio
norm_ratio_list = []
for i in range(1, 100):
    norm_ratio_list.append(norm_list[i]/norm_list[i-1])

d2l.plot(tf.range(1, 100), norm_ratio_list, 'Iteration', 'Ratio')

22.2.7. Discussion

We now see exactly what we hoped for! After normalizing the matrix by the principal eigenvalue, we see that the random data does not explode as before, but rather eventually equilibrates to a specific value. It would be nice to be able to do these things from first principles, and it turns out that if we look deeply at the mathematics, we can see that the largest eigenvalue of a large random matrix with independent mean zero, variance one Gaussian entries is on average about \(\sqrt{n}\), or in our case \(\sqrt{5} \approx 2.2\), due to a fascinating fact known as the circular law (Ginibre, 1965). The relationship between the eigenvalues (and a related object called singular values) of random matrices has been shown to have deep connections to proper initialization of neural networks, as was discussed in Pennington et al. (2017) and subsequent works.
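As an informal empirical check of the \(\sqrt{n}\) claim (a sketch in PyTorch; the size \(n = 500\) and the seed are arbitrary choices of ours), we can compare the largest eigenvalue magnitude of a large Gaussian matrix with \(\sqrt{n}\):

import torch

torch.manual_seed(0)
n = 500
A = torch.randn(n, n, dtype=torch.float64)  # independent mean zero, variance one entries

largest = torch.abs(torch.linalg.eig(A).eigenvalues).max()
print(largest.item(), n ** 0.5)  # the two numbers should be comparable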

22.2.8. Summary

  • Eigenvectors are vectors that a matrix only stretches (possibly flipping them), without rotating them off their line.

  • Eigenvalues are the amount that the eigenvectors are stretched by the application of the matrix.

  • The eigendecomposition of a matrix can allow for many operations to be reduced to operations on the eigenvalues.

  • The Gershgorin Circle Theorem can provide approximate values for the eigenvalues of a matrix.

  • The behavior of iterated matrix powers depends primarily on the size of the largest eigenvalue. This understanding has many applications in the theory of neural network initialization.

22.2.9. Exercises

  1. What are the eigenvalues and eigenvectors of

    (22.2.21)\[\begin{split}\mathbf{A} = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}?\end{split}\]
  2. What are the eigenvalues and eigenvectors of the following matrix, and what is strange about this example compared to the previous one?

    (22.2.22)\[\begin{split}\mathbf{A} = \begin{bmatrix} 2 & 1 \\ 0 & 2 \end{bmatrix}.\end{split}\]
  3. Without computing the eigenvalues, is it possible that the smallest eigenvalue of the following matrix is less than \(0.5\)? Note: this problem can be done in your head.

    (22.2.23)\[\begin{split}\mathbf{A} = \begin{bmatrix} 3.0 & 0.1 & 0.3 & 1.0 \\ 0.1 & 1.0 & 0.1 & 0.2 \\ 0.3 & 0.1 & 5.0 & 0.0 \\ 1.0 & 0.2 & 0.0 & 1.8 \end{bmatrix}.\end{split}\]