Eigenvectors define invariant directions of transformation.
Eigenvalues and eigenvectors describe the directions that remain unchanged under a linear transformation except for scaling: if A v = λv, then v is an eigenvector of A and λ is its eigenvalue.
From lectures:
Matrix multiplication represents a transformation of space. Most vectors change both direction and magnitude, but some special vectors only scale. These are the eigenvectors.
Key Idea:
A matrix transformation stretches or compresses vectors.
Eigenvectors are directions that stay the same under the transformation; only their length changes.
Eigenvalues tell how much scaling happens along each eigenvector.
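A minimal sketch of this idea, assuming NumPy is available (the matrix values below are just an example), checking that each eigenvector is only scaled by A:

    import numpy as np

    # A simple 2x2 transformation matrix (hypothetical example values)
    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    # np.linalg.eig returns the eigenvalues and the eigenvectors (as columns)
    eigenvalues, eigenvectors = np.linalg.eig(A)

    # Check the defining property A v = lambda v for each eigenpair
    for lam, v in zip(eigenvalues, eigenvectors.T):
        print(lam, np.allclose(A @ v, lam * v))   # True: the direction is only scaled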
Cholesky decomposition is a special matrix factorisation used for symmetric positive definite matrices.
From lecture discussions, this decomposition is powerful because it reduces a matrix into a triangular form, making computations easier and more stable.
Key Idea:
Cholesky decomposition expresses a symmetric positive definite matrix A as A = L Lᵀ, the product of a lower triangular matrix L and its transpose.
It is efficient and numerically stable.
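A short sketch using NumPy's built-in Cholesky routine (the matrix below is a hypothetical example) to show the A = L Lᵀ factorisation:

    import numpy as np

    # A symmetric positive definite matrix (hypothetical example values)
    A = np.array([[4.0, 2.0],
                  [2.0, 3.0]])

    # Cholesky factor: A = L @ L.T with L lower triangular
    L = np.linalg.cholesky(A)

    print(L)                          # lower triangular factor
    print(np.allclose(L @ L.T, A))    # True: the product reconstructs A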
Eigen decomposition expresses a matrix using its eigenvectors and eigenvalues.
From lecture discussions, this is one of the most important ways to understand the internal structure of a matrix.
Instead of treating the matrix as a black box, eigen decomposition reveals its fundamental directions and scaling behaviour.
Key Idea:
Eigen decomposition rewrites a matrix A in terms of directions (eigenvectors) and scaling factors (eigenvalues): A = Q Λ Q⁻¹, where the columns of Q are eigenvectors and Λ is a diagonal matrix of eigenvalues.
This makes complex transformations easier to understand and compute.
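A small sketch, assuming NumPy and an example matrix chosen for illustration, that reconstructs A from its eigenvectors and eigenvalues:

    import numpy as np

    # Hypothetical example matrix
    A = np.array([[3.0, 1.0],
                  [0.0, 2.0]])

    eigenvalues, Q = np.linalg.eig(A)   # columns of Q are eigenvectors
    Lam = np.diag(eigenvalues)          # diagonal matrix of eigenvalues

    # Reconstruct A from its directions and scaling factors: A = Q Lam Q^-1
    print(np.allclose(Q @ Lam @ np.linalg.inv(Q), A))   # True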
Diagonalisation expresses a matrix using its eigenvectors and eigenvalues when possible.
From lecture explanation, diagonalisation is one of the most powerful tools because it converts a complicated matrix into a much simpler form.
Instead of working with a full matrix, we work with a diagonal matrix, which is much easier to analyse and compute.
Key Idea:
If an n×n matrix has n linearly independent eigenvectors, it can be rewritten as a diagonal matrix using a change of basis: A = P D P⁻¹, where D is diagonal.
This simplifies matrix operations significantly; for example, Aᵏ = P Dᵏ P⁻¹, so powers of A only need powers of the diagonal entries.
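A brief sketch (NumPy, hypothetical example matrix) showing how diagonalisation makes matrix powers cheap:

    import numpy as np

    # Hypothetical diagonalisable matrix
    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])

    eigenvalues, P = np.linalg.eig(A)   # change-of-basis matrix P, eigenvalues for D

    # A = P D P^-1, so A^5 = P D^5 P^-1.
    # Powers of the diagonal matrix are just powers of its entries.
    A_pow5 = P @ np.diag(eigenvalues ** 5) @ np.linalg.inv(P)

    print(np.allclose(A_pow5, np.linalg.matrix_power(A, 5)))   # True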
Gradient descent is an optimisation algorithm used to train ML models and neural networks.
Gradient descent updates parameters by moving opposite the gradient.
It trains ML models by minimising the error between predicted and actual results, iteratively adjusting the model's parameters.
Each update moves the parameters step by step in the direction of steepest decrease of the loss function: θ ← θ - η ∇L(θ), where η is the learning rate.
This is how ML models learn the weights that give better predictions.
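A minimal gradient descent sketch on a toy least-squares problem; the data, learning rate, and number of steps below are all hypothetical example choices:

    import numpy as np

    # Toy data: y = X @ true_w, so the loss is minimised at true_w
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2))
    true_w = np.array([2.0, -3.0])
    y = X @ true_w

    w = np.zeros(2)          # initial parameters
    learning_rate = 0.1

    for step in range(200):
        predictions = X @ w
        error = predictions - y
        gradient = 2 * X.T @ error / len(y)   # gradient of the mean squared error
        w = w - learning_rate * gradient      # move opposite the gradient

    print(w)   # approaches [2, -3], the weights that minimise the loss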