Eigenvalues and Eigenvectors #

  • Eigenvalues give the scaling factors.
  • Eigenvectors define the invariant directions of a transformation.

Eigenvalues and eigenvectors describe directions that remain unchanged under a linear transformation, except for scaling.

Let \( A \in \mathbb{R}^{n \times n} \).

A scalar \( \lambda \in \mathbb{R} \) is an eigenvalue of \( A \), and a non-zero vector
\( \mathbf{x} \in \mathbb{R}^n \setminus \{0\} \)
is an eigenvector corresponding to \( \lambda \) if:

\[ A\mathbf{x} = \lambda \mathbf{x} \]

This is called the eigenvalue equation.
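As a quick numerical check (an illustration, not part of the original notes), NumPy's `np.linalg.eig` returns eigenvalue/eigenvector pairs that satisfy exactly this equation:

```python
import numpy as np

# A small example matrix (chosen here for illustration).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and the eigenvectors
# (stored as the columns of the second return value).
eigenvalues, eigenvectors = np.linalg.eig(A)

# Verify the eigenvalue equation A x = lambda x for each pair.
for lam, x in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ x, lam * x)
```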

They are fundamental in:

  • PCA
  • Optimisation
  • Spectral methods
  • Stability analysis
  • Least squares
  • Neural networks

Equivalent Characterisations #

The following statements are equivalent:

  1. \( \lambda \) is an eigenvalue of \( A \).

  2. There exists \( \mathbf{x} \neq 0 \) such that:

\[ (A - \lambda I)\mathbf{x} = 0 \]

     i.e. the homogeneous system has a non-trivial solution.

  3. Rank condition:

\[ \operatorname{rank}(A - \lambda I) < n \]

  4. Determinant condition:

\[ \det(A - \lambda I) = 0 \]

Characteristic Polynomial #

\[ p_A(\lambda) = \det(A - \lambda I) \]

Eigenvalues are the roots of the characteristic polynomial.
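A sketch of this root condition in NumPy (my own illustration, not from the notes): the determinant of \( A - \lambda I \) vanishes exactly at the eigenvalues.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0]])

eigenvalues = np.linalg.eigvals(A)

# Each computed eigenvalue should be a root of
# p_A(lambda) = det(A - lambda * I).
for lam in eigenvalues:
    assert abs(np.linalg.det(A - lam * np.eye(2))) < 1e-10

# A non-eigenvalue gives a non-zero determinant.
assert abs(np.linalg.det(A - 1.0 * np.eye(2))) > 1e-10
```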


Scaling Property #

If \( \mathbf{x} \) is an eigenvector corresponding to \( \lambda \), then:

\[ c\mathbf{x}, \quad c \in \mathbb{R} \setminus \{0\} \]

is also an eigenvector corresponding to \( \lambda \), since \( A(c\mathbf{x}) = c\,A\mathbf{x} = \lambda (c\mathbf{x}) \).

Eigenvectors are defined up to scaling.
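A short numerical confirmation (illustrative only), using an eigenpair of the matrix from the worked example below:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
lam = 2.0
x = np.array([1.0, 1.0])          # eigenvector of A for lambda = 2

# Any non-zero scalar multiple of x is again an eigenvector for lambda.
for c in (0.5, -3.0, 10.0):
    assert np.allclose(A @ (c * x), lam * (c * x))
```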


Worked Example #

Consider:

\[ A = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} \]

Step 1: Characteristic Polynomial #

\[ \det(A - \lambda I) = \begin{vmatrix} 1-\lambda & 1 \\ 1 & 1-\lambda \end{vmatrix} = (1-\lambda)^2 - 1 \]

Solve:

\[ (1-\lambda)^2 - 1 = 0 \quad \Longrightarrow \quad 1 - \lambda = \pm 1 \]

Eigenvalues:

\[ \lambda = 2, \quad \lambda = 0 \]

Step 2: Eigenvectors #

For \( \lambda = 0 \):

\[ \mathbf{x} = \begin{bmatrix} 1 \\ -1 \end{bmatrix} \]

For \( \lambda = 2 \):

\[ \mathbf{x} = \begin{bmatrix} 1 \\ 1 \end{bmatrix} \]
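The computation above can be reproduced with NumPy (a sketch, not from the original notes); note that `eig` does not return the eigenvalues in any particular order:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)

# Sort the eigenvalues so the comparison is order-independent.
order = np.argsort(eigenvalues)
assert np.allclose(eigenvalues[order], [0.0, 2.0])

# Each returned eigenvector satisfies the eigenvalue equation;
# they are normalised multiples of the hand-computed (1, -1) and (1, 1).
for lam, x in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ x, lam * x)
```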

Eigenspace #

For eigenvalue \( \lambda \), the eigenspace is:

\[ E_\lambda = \operatorname{Null}(A - \lambda I) \]

It is a subspace of \( \mathbb{R}^n \).
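One way to compute a basis of the eigenspace numerically (an illustration under my own choices, not from the notes) is to extract the nullspace of \( A - \lambda I \) from an SVD:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
lam = 2.0

M = A - lam * np.eye(2)

# Right-singular vectors with (numerically) zero singular values
# span the nullspace of M, i.e. the eigenspace E_lambda.
_, s, Vt = np.linalg.svd(M)
basis = Vt[s < 1e-10]             # rows form a basis of E_lambda

assert basis.shape[0] == 1        # E_2 is one-dimensional here
assert np.allclose(M @ basis[0], 0.0)
```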


Spectrum #

The set of all eigenvalues of \( A \) is called the spectrum of \( A \).


Important Properties #

Transpose Property #

\[ \det(A - \lambda I) = \det(A^T - \lambda I) \]

Therefore, \( A \) and \( A^T \) have the same eigenvalues.
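A quick empirical check of this fact (illustrative, with a random matrix of my choosing):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# A and A^T share the characteristic polynomial, hence the eigenvalues.
ev_A = np.sort_complex(np.linalg.eigvals(A))
ev_AT = np.sort_complex(np.linalg.eigvals(A.T))
assert np.allclose(ev_A, ev_AT)
```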


Distinct Eigenvalues #

If an \( n \times n \) matrix has \( n \) distinct eigenvalues, the corresponding eigenvectors are linearly independent and therefore form a basis of \( \mathbb{R}^n \).
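A small numerical illustration (my own example matrix): with distinct eigenvalues, the matrix whose columns are the eigenvectors has full rank.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 5.0]])      # distinct eigenvalues 2 and 5

eigenvalues, V = np.linalg.eig(A)
assert len(set(np.round(eigenvalues, 8))) == 2   # eigenvalues are distinct

# Columns of V (the eigenvectors) are linearly independent,
# so V is invertible.
assert np.linalg.matrix_rank(V) == 2
```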


Identity Matrix Example #

For \( I_n \):

\[ I_n \mathbf{x} = 1 \cdot \mathbf{x} \]
  • Only eigenvalue: \( \lambda = 1 \)
  • Eigenspace: \( \mathbb{R}^n \)

Symmetric Matrices #

If \( A \) is symmetric:

  • All eigenvalues are real
  • Eigenvectors for distinct eigenvalues are orthogonal

Spectral Theorem #

If \( A \in \mathbb{R}^{n \times n} \) is symmetric:

  • There exists an orthonormal basis of eigenvectors
  • All eigenvalues are real

Diagonalisation:

\[ A = Q \Lambda Q^T \]

Where:

  • \( Q \) is orthogonal
  • \( \Lambda \) is diagonal
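This decomposition is available directly via `np.linalg.eigh`, which is specialised for symmetric matrices (a sketch with my own random example):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3))
A = (B + B.T) / 2                 # symmetrise to obtain a symmetric A

# eigh: real eigenvalues, orthonormal eigenvectors (columns of Q).
eigenvalues, Q = np.linalg.eigh(A)

assert np.allclose(Q.T @ Q, np.eye(3))                  # Q is orthogonal
assert np.allclose(Q @ np.diag(eigenvalues) @ Q.T, A)   # A = Q Lambda Q^T
```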

Machine Learning Connection #

If \( A \in \mathbb{R}^{m \times n} \) , then:

\[ A^T A \]

is symmetric and positive semidefinite; it is positive definite if \( \operatorname{rank}(A) = n \), because then \( A\mathbf{x} \neq 0 \) for every \( \mathbf{x} \neq 0 \), so:

\[ \mathbf{x}^T A^T A \mathbf{x} = \|A\mathbf{x}\|^2 > 0 \]

Appears in:

  • Linear regression
  • Normal equations
  • PCA
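The positive-definiteness claim can be checked numerically (illustrative, with a random full-column-rank matrix):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3))   # full column rank (generically)
assert np.linalg.matrix_rank(A) == 3

G = A.T @ A                        # Gram matrix: symmetric
eigenvalues = np.linalg.eigvalsh(G)

# Full column rank of A implies all eigenvalues of A^T A are positive.
assert np.all(eigenvalues > 0)
```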

Summary #

  • Eigenvalues are roots of the characteristic polynomial
  • Eigenspace is the nullspace of \( A - \lambda I \)
  • Symmetric matrices have real eigenvalues
  • Distinct eigenvalues imply independence
  • \( A^T A \) is symmetric positive definite (full rank case)
  • Spectral theorem provides orthonormal eigenbasis
