
Eigenvalues and Eigenvectors #

  • Eigenvalues give the scaling factors.
  • Eigenvectors define the invariant directions of a transformation.

Eigenvalues and eigenvectors describe directions that remain unchanged under a linear transformation, except for scaling.

From lectures: matrix multiplication represents a transformation of space.
Most vectors change direction and magnitude.
Some special vectors only scale.
These are eigenvectors.

Key Idea: A matrix transformation stretches or compresses vectors. Eigenvectors are the directions that are preserved, and eigenvalues tell how much scaling happens along them.


Definition #

Let

\[ A \in \mathbb{R}^{n \times n} \]

A scalar

\[ \lambda \in \mathbb{R} \]

is an eigenvalue of \( A \), and a non-zero vector

\[ \mathbf{x} \in \mathbb{R}^n \setminus \{0\} \]

a corresponding eigenvector, if:

\[ A\mathbf{x} = \lambda \mathbf{x} \]
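A minimal numpy sketch of the definition (the matrix below is just an illustrative choice): `np.linalg.eig` returns eigenvalue/eigenvector pairs, and each pair is checked against \( A\mathbf{x} = \lambda \mathbf{x} \).

```python
import numpy as np

# Illustrative matrix (any square matrix would do).
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])

# np.linalg.eig returns eigenvalues and eigenvectors (as columns).
eigvals, eigvecs = np.linalg.eig(A)

# Verify the defining relation A x = lambda x for each pair.
for lam, x in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ x, lam * x)
```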

Intuition (Lecture + Webinar) #

  • Matrix = transformation
  • Most vectors → rotate + scale
  • Eigenvectors → only scale

Eigenvectors define the “natural directions” of a matrix.


Geometric Interpretation #

| Eigenvalue | Meaning |
|---|---|
| \( \lambda > 1 \) | Stretch |
| \( 0 < \lambda < 1 \) | Shrink |
| \( \lambda = 1 \) | No change |
| \( \lambda = 0 \) | Collapse |
| \( \lambda < 0 \) | Flip |
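The table rows can be seen directly in numpy; the diagonal matrix below is a hypothetical example whose eigenvectors are the standard basis vectors:

```python
import numpy as np

# Diagonal matrix illustrating two table rows:
# lambda = 2 stretches e1, lambda = -1 flips e2.
A = np.diag([2.0, -1.0])

print(A @ np.array([1.0, 0.0]))  # [ 2.  0.]  stretch by 2
print(A @ np.array([0.0, 1.0]))  # [ 0. -1.]  flip (direction reversed)
```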

Equivalent Characterisations #

For a scalar \( \lambda \), the following are equivalent:

\[ (A - \lambda I)\mathbf{x} = 0 \ \text{has a non-zero solution} \ \mathbf{x} \]

\[ \operatorname{rank}(A - \lambda I) < n \]

\[ \det(A - \lambda I) = 0 \]
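A quick numeric check (using the 2×2 matrix from the worked example below and its eigenvalue \( \lambda = 2 \)) that all three conditions hold simultaneously:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
lam = 2.0                         # a known eigenvalue of this A
M = A - lam * np.eye(2)

print(np.linalg.det(M))           # ~0.0: det(A - lambda I) = 0
print(np.linalg.matrix_rank(M))   # 1 < n = 2: rank deficient
print(M @ np.array([1.0, 1.0]))   # [0. 0.]: non-zero solution exists
```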

Characteristic Polynomial #

\[ p_A(\lambda) = \det(A - \lambda I) \]

The eigenvalues of \( A \) are exactly the roots of \( p_A(\lambda) \).
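In numpy this can be made explicit: `np.poly` on a square matrix returns the coefficients of \( \det(\lambda I - A) \), and `np.roots` recovers the eigenvalues (same matrix as the worked example below):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0]])

coeffs = np.poly(A)      # [ 1. -2.  0.]  i.e. lambda^2 - 2 lambda
print(np.roots(coeffs))  # [2. 0.]        the eigenvalues
```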


How to Find Eigenvalues #

Solve the characteristic equation for \( \lambda \):

\[ \det(A - \lambda I) = 0 \]

Finding Eigenvectors #

For each eigenvalue \( \lambda \), solve the homogeneous system:

\[ (A - \lambda I)\mathbf{x} = 0 \]

Eigenvectors lie in:

\[ \operatorname{Null}(A - \lambda I) \]
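A sketch using `scipy.linalg.null_space` (it returns an orthonormal basis, so the printed column is a unit-length multiple of the eigenvector):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
lam = 0.0  # one eigenvalue of this A

# Eigenvectors for lam span Null(A - lam I).
basis = null_space(A - lam * np.eye(2))
print(basis)  # one column, proportional to (1, -1)
```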

Scaling Property #

If \( \mathbf{x} \) is an eigenvector for \( \lambda \), then so is every non-zero multiple

\[ c\mathbf{x}, \quad c \neq 0 \]

so eigenvectors are determined only up to scale.

Eigenspace #

All eigenvectors for \( \lambda \), together with the zero vector, form a subspace:

\[ E_\lambda = \operatorname{Null}(A - \lambda I) \]

Spectrum #

The set of all eigenvalues of \( A \) is called its spectrum.


Worked Example #

\[ A = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} \]

Characteristic polynomial:

\[ \det(A - \lambda I) = (1 - \lambda)^2 - 1 = \lambda^2 - 2\lambda = \lambda(\lambda - 2) \]

Eigenvalues:

\[ \lambda_1 = 2, \quad \lambda_2 = 0 \]

Corresponding eigenvectors:

\[ \mathbf{x}_1 = \begin{bmatrix}1\\1\end{bmatrix}, \quad \mathbf{x}_2 = \begin{bmatrix}1\\-1\end{bmatrix} \]
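The hand computation can be cross-checked with `np.linalg.eig` (columns of the returned matrix are eigenvectors, normalised and possibly sign-flipped relative to the ones above):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)  # [2. 0.]
print(eigvecs)  # columns proportional to (1, 1) and (1, -1)
```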

Important Properties #

Distinct Eigenvalues #

Eigenvectors corresponding to distinct eigenvalues are linearly independent.


Transpose Property #

\[ \det(A - \lambda I) = \det(A^T - \lambda I) \]

so \( A \) and \( A^T \) have the same eigenvalues.

Symmetric Matrices #

  • Eigenvalues are real
  • Eigenvectors for distinct eigenvalues are orthogonal

Spectral Theorem #

For symmetric \( A \):

\[ A = Q \Lambda Q^T \]

where \( Q \) is orthogonal (its columns are orthonormal eigenvectors) and \( \Lambda \) is the diagonal matrix of eigenvalues.
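A sketch with `np.linalg.eigh`, numpy's routine for symmetric matrices (the matrix below is an arbitrary symmetric example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])     # arbitrary symmetric example

# eigh: real eigenvalues w, orthonormal eigenvectors in columns of Q.
w, Q = np.linalg.eigh(A)

assert np.allclose(A, Q @ np.diag(w) @ Q.T)   # A = Q Lambda Q^T
assert np.allclose(Q.T @ Q, np.eye(2))        # Q is orthogonal
```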

Diagonalisation Link #

If \( A \) has \( n \) linearly independent eigenvectors, then

\[ A = P D P^{-1} \]

where the columns of \( P \) are eigenvectors and \( D \) holds the eigenvalues on its diagonal.
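The same reconstruction with a general (not necessarily symmetric) eigendecomposition, using the worked example matrix; here \( P^{-1} \) plays the role that \( Q^T \) plays for symmetric matrices:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0]])

eigvals, P = np.linalg.eig(A)   # columns of P are eigenvectors
D = np.diag(eigvals)

# Diagonalisable: the two eigenvectors are linearly independent.
assert np.allclose(A, P @ D @ np.linalg.inv(P))
```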

Machine Learning Connection #

\[ A^T A \]

is symmetric and positive semi-definite:

\[ \mathbf{x}^T A^T A \mathbf{x} = \|A\mathbf{x}\|^2 \geq 0 \]

and positive definite (strict inequality for \( \mathbf{x} \neq 0 \)) when \( A \) has full column rank.
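A quick check of the semi-definiteness claim; the random tall matrix below is a hypothetical stand-in for a data matrix and has full column rank with probability 1:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))   # hypothetical 5x3 data matrix

G = A.T @ A                       # Gram matrix

assert np.allclose(G, G.T)        # symmetric
print(np.linalg.eigvalsh(G))      # all >= 0; > 0 since A has full
                                  # column rank here
```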

Used in:

  • PCA
  • Regression
  • SVD

Common Exam Questions #

  • Find eigenvalues
  • Find eigenvectors
  • Check diagonalisation
  • Link with null space
  • Interpret geometrically

Hidden Exam Pattern #

  • Concept + computation
  • Links with rank, null space, SVD

Mistakes to Avoid #

  • Counting the zero vector as an eigenvector ❌ (eigenvectors are non-zero by definition)
  • Missing determinant condition
  • Not solving null space fully
  • Ignoring multiplicity

Strategy to Prepare #

  1. Practice determinant
  2. Solve systems
  3. Connect to rank
  4. Link with diagonalisation

Summary #

  • Eigenvalues are roots of the characteristic polynomial
  • Eigenspace is the nullspace of \( A - \lambda I \)
  • Symmetric matrices have real eigenvalues
  • Distinct eigenvalues give linearly independent eigenvectors
  • \( A^T A \) is symmetric positive definite (full rank case)
  • Spectral theorem provides orthonormal eigenbasis
