The Cauchy–Schwarz Inequality is one of the most important results in linear algebra.
It places a fundamental bound on the inner product of two vectors.
If you see angle, cosine, similarity, or inner product bounds → think Cauchy–Schwarz Inequality
Key Idea:
The absolute value of the inner product (dot product) can never exceed the product of the vectors' magnitudes: |⟨x, y⟩| ≤ ‖x‖ ‖y‖.
This guarantees that cos θ = ⟨x, y⟩ / (‖x‖ ‖y‖) always lies in [−1, 1], so angles between vectors are well defined.
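As a quick sanity check, the inequality can be verified numerically. The two vectors below are arbitrary illustrative choices, not from the lecture:

```python
import numpy as np

# Two arbitrary vectors chosen for illustration.
x = np.array([1.0, 2.0, 3.0])
y = np.array([-4.0, 0.0, 5.0])

inner = abs(np.dot(x, y))                       # |<x, y>|
bound = np.linalg.norm(x) * np.linalg.norm(y)   # ||x|| * ||y||

# Cauchy-Schwarz: the inner product never exceeds the product of norms.
assert inner <= bound

# Consequence: the cosine formula always gives a valid value in [-1, 1].
cos_theta = np.dot(x, y) / bound
assert -1.0 <= cos_theta <= 1.0
```

Because the cosine always lands in [−1, 1], `np.arccos(cos_theta)` is guaranteed to return a real angle for any pair of nonzero vectors.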
Decompositions reveal structure in matrices and power algorithms like PCA.
Matrix decompositions break complex matrices into simpler parts.
From the lecture introduction, matrices are used to describe mappings and transformations of vectors.
That is why decomposition is important:
it lets us understand a complicated transformation by rewriting it using simpler building blocks.
In the slides, the topic is introduced as part of three closely connected goals:
how to summarise matrices,
how matrices can be decomposed,
and how the decompositions can be used for matrix approximations.
Eigenvectors define the invariant directions of a transformation.
Eigenvalues and eigenvectors describe directions that a linear transformation leaves unchanged except for scaling.
From lectures:
matrix multiplication represents a transformation of space. Most vectors change direction and magnitude. Some special vectors only scale. These are eigenvectors.
Key Idea:
A matrix transformation stretches or compresses vectors.
Eigenvectors are directions that remain unchanged (the vector is only scaled, and flipped if the eigenvalue is negative).
Eigenvalues tell how much scaling happens.
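The defining equation Av = λv is easy to check numerically. The 2×2 matrix here is a hypothetical example, not one from the lecture:

```python
import numpy as np

# An illustrative symmetric 2x2 matrix (eigenvalues 3 and 1).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(A)

# Each column v of eigvecs is only scaled by A: A @ v == lambda * v.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)
```

A generic vector such as `[1, 0]` would change direction under `A`; only the eigenvector directions are preserved.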
Cholesky decomposition is a special matrix factorisation used for symmetric positive definite matrices.
From lecture discussions, this decomposition is powerful because it factors the matrix into triangular form, making computations easier and more stable.
Key Idea:
Cholesky decomposition expresses a symmetric positive definite matrix as the product of a lower triangular matrix and its transpose: A = LLᵀ.
It is efficient (roughly half the cost of an LU decomposition) and numerically stable.
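A minimal sketch of the factorisation using NumPy, with an illustrative symmetric positive definite matrix of my own choosing:

```python
import numpy as np

# An illustrative symmetric positive definite matrix.
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

L = np.linalg.cholesky(A)   # lower triangular Cholesky factor

# The factor is lower triangular and reconstructs A exactly: A = L L^T.
assert np.allclose(L, np.tril(L))
assert np.allclose(L @ L.T, A)
```

`np.linalg.cholesky` raises `LinAlgError` if the matrix is not positive definite, which also makes it a common practical test for positive definiteness.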
Eigen decomposition expresses a matrix using its eigenvectors and eigenvalues.
From lecture discussions, this is one of the most important ways to understand the internal structure of a matrix.
Instead of treating the matrix as a black box, eigen decomposition reveals its fundamental directions and scaling behaviour.
Key Idea:
Eigen decomposition rewrites a matrix in terms of directions (eigenvectors) and scaling factors (eigenvalues).
This makes complex transformations easier to understand and compute.
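The decomposition A = PDP⁻¹ (columns of P are eigenvectors, D is diagonal with the eigenvalues) can be sketched as follows; the matrix is a hypothetical example with eigenvalues 5 and 2:

```python
import numpy as np

# An illustrative diagonalisable matrix (eigenvalues 5 and 2).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, P = np.linalg.eig(A)   # columns of P are eigenvectors
D = np.diag(eigvals)

# Reconstruct A from its directions (P) and scaling factors (D).
assert np.allclose(P @ D @ np.linalg.inv(P), A)

# Matrix powers become cheap: A^5 = P D^5 P^{-1}, and D^5 is elementwise.
assert np.allclose(np.linalg.matrix_power(A, 5),
                   P @ np.diag(eigvals**5) @ np.linalg.inv(P))
```

The power computation illustrates the "easier to compute" point: raising a diagonal matrix to a power only requires raising its diagonal entries.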
Diagonalisation expresses a matrix using its eigenvectors and eigenvalues when possible.
From lecture explanation, diagonalisation is one of the most powerful tools because it converts a complicated matrix into a much simpler form.
Instead of working with a full matrix, we work with a diagonal matrix, which is much easier to analyse and compute.
Key Idea:
If an n × n matrix has n linearly independent eigenvectors, it can be rewritten as a diagonal matrix via a change of basis: A = PDP⁻¹, where the columns of P are the eigenvectors and D holds the eigenvalues.
This simplifies matrix operations significantly.
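The change of basis can be seen directly: in eigenvector coordinates the matrix becomes diagonal, i.e. P⁻¹AP = D. The symmetric matrix below is an illustrative choice (symmetric matrices are always diagonalisable):

```python
import numpy as np

# An illustrative symmetric matrix, guaranteed diagonalisable.
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

eigvals, P = np.linalg.eig(A)

# Change of basis into eigenvector coordinates: P^{-1} A P is diagonal.
D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag(eigvals))
```

Working in this basis turns the full matrix into independent per-axis scalings, which is why diagonalisation simplifies analysis and computation.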