Diagonalization #
Diagonalization expresses a matrix in terms of its eigenvectors and eigenvalues, when possible.
As explained in lecture, diagonalization is one of the most powerful tools in linear algebra because it converts a complicated matrix into a much simpler form.
Instead of working with a full matrix, we work with a diagonal matrix, which is much easier to analyse and compute with.
Key Idea: If a matrix has enough independent eigenvectors, it can be rewritten as a diagonal matrix using a change of basis. This simplifies matrix operations significantly.
Core Idea #
A matrix is diagonalizable if we can write it as:
\[ A = P D P^{-1} \]
where:
- P is an invertible matrix whose columns are eigenvectors of A
- D is a diagonal matrix containing the corresponding eigenvalues
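As a quick numerical check (a NumPy sketch, not part of the lecture material; the example matrix is chosen here for illustration), `np.linalg.eig` returns exactly these ingredients P and D:

```python
import numpy as np

# Illustrative matrix (not from the lecture notes)
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, P = np.linalg.eig(A)   # columns of P are eigenvectors
D = np.diag(eigvals)            # diagonal matrix of eigenvalues

A_rebuilt = P @ D @ np.linalg.inv(P)
print(np.allclose(A, A_rebuilt))  # True: A = P D P^{-1}
```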
Why Diagonalization Works (Lecture Insight) #
From lectures:
- Matrix multiplication represents a linear transformation
- Eigenvectors are directions the transformation does not rotate, only scales
- Eigenvalues tell how much scaling happens along those directions
So if we express everything in the eigenvector basis:
- the transformation becomes simple scaling
- there is no mixing of components
Change of Basis Interpretation #
Diagonalization is essentially a change of coordinate system.
\[ D = P^{-1} A P \]
Interpretation (reading A = P D P^{-1} applied to a vector, right to left):
- P^{-1} converts coordinates into the eigenvector basis
- D scales each coordinate by its eigenvalue
- P converts the result back to the original basis
From lecture: this is why diagonal matrices are easy — they scale each coordinate independently.
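The three-step reading above can be traced numerically (a sketch with an illustrative matrix, not from the lecture): convert a vector to the eigenbasis, scale each coordinate, convert back, and compare against applying A directly.

```python
import numpy as np

# Illustrative matrix and vector (my own choice, not from the lecture)
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
w, P = np.linalg.eig(A)
x = np.array([1.0, 0.5])

c = np.linalg.solve(P, x)   # P^{-1} x : coordinates in the eigenbasis
scaled = w * c              # D c     : each coordinate scaled by its eigenvalue
y = P @ scaled              # P (D c) : back to the original basis

print(np.allclose(y, A @ x))  # True: same result as applying A directly
```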
When is a Matrix Diagonalizable? #
An n × n matrix is diagonalizable if it has enough linearly independent eigenvectors:
\[ \text{number of independent eigenvectors} = n \]
Lecture rule:
- Distinct eigenvalues ⇒ automatically diagonalizable
- Repeated eigenvalues ⇒ must check whether there are still enough independent eigenvectors
Algebraic vs Geometric Multiplicity #
From slides:
- Algebraic multiplicity = number of times an eigenvalue appears as a root of the characteristic polynomial
- Geometric multiplicity = number of independent eigenvectors for that eigenvalue
Condition for diagonalization (for every eigenvalue):
\[ \text{geometric multiplicity} = \text{algebraic multiplicity} \]
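A sketch of how this check can be done numerically (the Jordan-block example is my own, not from the slides): the geometric multiplicity of λ is n − rank(A − λI).

```python
import numpy as np

# This matrix has eigenvalue 1 with algebraic multiplicity 2,
# but only one independent eigenvector, so it is NOT diagonalizable.
J = np.array([[1.0, 1.0],
              [0.0, 1.0]])
lam = 1.0
n = J.shape[0]

# geometric multiplicity = n - rank(A - lambda I)
geo = n - np.linalg.matrix_rank(J - lam * np.eye(n))
print(geo)  # 1  (< algebraic multiplicity 2)
```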
Special Case: Symmetric Matrices #
From lecture:
Real symmetric matrices are always diagonalizable.
Even stronger:
\[ A = Q \Lambda Q^T \]
where:
- Q is orthogonal (its columns are orthonormal eigenvectors)
- Λ is diagonal with the (real) eigenvalues
This is called spectral decomposition.
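A sketch of the spectral decomposition in NumPy (the symmetric example matrix is my own): `np.linalg.eigh`, which is designed for symmetric matrices, returns orthonormal eigenvectors directly.

```python
import numpy as np

# Illustrative real symmetric matrix
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam, Q = np.linalg.eigh(A)          # eigenvalues ascending, Q orthogonal
print(np.allclose(Q.T @ Q, np.eye(2)))         # True: Q^T Q = I
print(np.allclose(A, Q @ np.diag(lam) @ Q.T))  # True: A = Q Lambda Q^T
```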
Why Diagonalization is Useful #
From lecture and webinar:
1. Matrix Powers #
\[ A^k = P D^k P^{-1} \]
Easy because powering D just raises each diagonal entry to the k-th power.
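A sketch of the power formula in action (example matrix chosen here for illustration): powering the diagonal is just an elementwise power.

```python
import numpy as np

# Illustrative matrix
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
w, P = np.linalg.eig(A)
k = 5

# A^k = P D^k P^{-1}; D^k is just w**k on the diagonal
Ak = P @ np.diag(w**k) @ np.linalg.inv(P)
print(np.allclose(Ak, np.linalg.matrix_power(A, k)))  # True
```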
2. Matrix Inverse #
\[ A^{-1} = P D^{-1} P^{-1} \]
Valid whenever all eigenvalues are nonzero (i.e. A is invertible).
3. Understanding Structure #
- Eigenvalues show scaling behaviour
- Eigenvectors show directions
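The inverse formula above can be checked the same way (a sketch with an illustrative matrix whose eigenvalues are all nonzero): inverting D means taking reciprocals of the diagonal.

```python
import numpy as np

# Illustrative matrix with nonzero eigenvalues (3 and 1)
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
w, P = np.linalg.eig(A)

# A^{-1} = P D^{-1} P^{-1}; D^{-1} is 1/w on the diagonal
A_inv = P @ np.diag(1.0 / w) @ np.linalg.inv(P)
print(np.allclose(A_inv, np.linalg.inv(A)))  # True
```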
Step-by-Step Method (Exam) #
- Find eigenvalues by solving the characteristic equation:
\[ \det(A - \lambda I) = 0 \]
- Find eigenvectors by solving, for each eigenvalue λ:
\[ (A - \lambda I) v = 0 \]
- Form matrix P using the eigenvectors as columns
- Form diagonal matrix D using the eigenvalues, in the same order
- Verify:
\[ A = P D P^{-1} \quad \text{(equivalently } A P = P D\text{)} \]
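The steps above can be sketched as a small helper (a sketch only; the function name `diagonalize` is my own, not from the course):

```python
import numpy as np

def diagonalize(A, tol=1e-9):
    # Steps 1-2: eigenvalues and eigenvectors in one call
    w, P = np.linalg.eig(A)
    # Diagonalizable only if the eigenvectors are independent (P invertible)
    if np.linalg.matrix_rank(P, tol=tol) < A.shape[0]:
        raise ValueError("not enough independent eigenvectors")
    # Steps 3-4: form P (done) and D, in matching order
    D = np.diag(w)
    # Step 5: verify A P = P D
    assert np.allclose(A @ P, P @ D)
    return P, D

P, D = diagonalize(np.array([[4.0, 2.0], [1.0, 3.0]]))
print(np.sort(np.diag(D)))  # eigenvalues of the exam example: [2. 5.]
```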
Example #
Given:
\[ A = \begin{bmatrix} 4 & 2 \\ 1 & 3 \end{bmatrix} \]
Steps:
- Find eigenvalues
- Find eigenvectors
- Construct P and D
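Working the example through (a sketch; the hand computation is standard): the characteristic polynomial is λ² − 7λ + 10 = (λ − 5)(λ − 2), so the eigenvalues are 5 and 2.

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [1.0, 3.0]])

# Eigenvalues come from det(A - lambda I) = lambda^2 - 7 lambda + 10 = 0
w, P = np.linalg.eig(A)
print(np.sort(w).round(6))  # [2. 5.]

# Reconstruct A from P and D to confirm the diagonalization
print(np.allclose(A, P @ np.diag(w) @ np.linalg.inv(P)))  # True
```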
Geometric Interpretation #
- Original space → transformed space
- Diagonalization aligns axes with eigenvectors
Result:
- transformation becomes pure scaling
Connection to SVD #
From lecture:
- Diagonalization works only for (some) square matrices
- SVD generalises the idea to any rectangular matrix
Both aim to simplify matrix structure.
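A sketch of the contrast (the 3 × 2 example matrix is my own): `eig` requires a square matrix, while `svd` decomposes any shape into orthonormal factors around a diagonal core.

```python
import numpy as np

# 3 x 2 matrix: np.linalg.eig would reject this shape,
# but SVD still gives A = U Sigma V^T
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
print(np.allclose(A, U @ np.diag(s) @ Vt))  # True
```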
Hidden Exam Pattern #
From lectures:
- Often combined with:
  - eigenvalues
  - matrix powers
  - rank
Common Mistakes #
- Not checking the number of independent eigenvectors
- Assuming repeated eigenvalues automatically imply diagonalizability
- Computing the inverse of P incorrectly
- Mixing up the order of multiplication (A = P D P^{-1}, not P^{-1} D P)
Strategy to Prepare #
- Practice eigenvalue problems
- Check independence of eigenvectors
- Practice forming P and D
- Solve matrix power problems
Quick Summary Table #
| Concept | Meaning |
|---|---|
| A = PDP⁻¹ | Diagonalization |
| D | Diagonal matrix of eigenvalues |
| P | Columns are the corresponding eigenvectors |
| Condition | n linearly independent eigenvectors |
References #
- Lecture slides (Eigenvalues, Decomposition)
- Course handout
- Webinar discussions