Machine Learning


Singular Value Decomposition (SVD) #

Singular Value Decomposition (SVD) is one of the most important matrix decomposition techniques in linear algebra and machine learning.

It factorises any matrix into three simpler matrices that reveal its structure.

Key Idea: SVD decomposes a matrix into rotations + scaling. It tells us how data is transformed along orthogonal directions.


Definition #

For any real matrix \[ A \in \mathbb{R}^{m \times n} \] the SVD is the factorisation

\[ A = U \Sigma V^{\top} \]

where \( U \in \mathbb{R}^{m \times m} \) and \( V \in \mathbb{R}^{n \times n} \) are orthogonal matrices, and \( \Sigma \in \mathbb{R}^{m \times n} \) is a diagonal matrix whose non-negative diagonal entries \( \sigma_1 \ge \sigma_2 \ge \dots \ge 0 \) are the singular values of \( A \).
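The factorisation can be computed directly with NumPy; the matrix below is an arbitrary illustrative choice, not from the text:

```python
import numpy as np

# An arbitrary 3x2 real matrix, so U is 3x3, Sigma is 3x2, V is 2x2.
A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])

U, s, Vt = np.linalg.svd(A)           # s holds the singular values, descending
Sigma = np.zeros(A.shape)
Sigma[:len(s), :len(s)] = np.diag(s)  # embed s in an m x n diagonal matrix

# Reconstruct A from its three factors: A = U @ Sigma @ V^T
A_rec = U @ Sigma @ Vt
print(np.allclose(A, A_rec))  # True
```

`np.linalg.svd` returns the singular values as a 1-D array, so they must be placed on the diagonal of an \( m \times n \) matrix before the product reproduces \( A \).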


Vector Calculus #

Vector calculus extends differentiation to multivariate and vector-valued functions.

Gradients power learning. This section builds differentiation skills needed for backpropagation.



flowchart TD

    %% Core Node
    PD["Partial Derivatives"]

    %% Supporting Concepts
    DQ["Difference Quotient"]
    JH["Jacobian / Hessian"]
    TS["Taylor Series"]

    %% Application Chapters
    CH6["Probability"]
    CH7["Optimization"]
    CH9["Regression"]
    CH10["Dimensionality Reduction"]
    CH11["Density Estimation"]
    CH12["Classification"]

    %% Relationships
    DQ -->|defines| PD
    PD -->|collected in| JH
    JH -->|used in| TS
    JH -->|used in| CH6
	
    PD -->|used in| CH7
    PD -->|used in| CH9
    PD -->|used in| CH10
    PD -->|used in| CH11
    PD -->|used in| CH12

    %% Styling (Your Soft Academic Palette)
    style PD fill:#90CAF9,stroke:#1E88E5,color:#000

    style DQ fill:#CE93D8,stroke:#8E24AA,color:#000
    style JH fill:#CE93D8,stroke:#8E24AA,color:#000
    style TS fill:#CE93D8,stroke:#8E24AA,color:#000
    style CH6 fill:#CE93D8,stroke:#8E24AA,color:#000
	
    style CH7 fill:#C8E6C9,stroke:#2E7D32,color:#000
    style CH9 fill:#C8E6C9,stroke:#2E7D32,color:#000
    style CH10 fill:#C8E6C9,stroke:#2E7D32,color:#000
    style CH11 fill:#C8E6C9,stroke:#2E7D32,color:#000
    style CH12 fill:#C8E6C9,stroke:#2E7D32,color:#000
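The difference quotient at the top of the diagram can be made concrete: a partial derivative is approximated numerically by perturbing one coordinate at a time. A minimal sketch (the function `f` and step size `h` are illustrative choices):

```python
import numpy as np

def numerical_gradient(f, x, h=1e-6):
    """Approximate the gradient of f at x with central difference quotients."""
    grad = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        # Difference quotient in coordinate i: (f(x+h*e_i) - f(x-h*e_i)) / 2h
        grad[i] = (f(x + e) - f(x - e)) / (2 * h)
    return grad

# Example: f(x, y) = x^2 + 3y, whose gradient is (2x, 3).
f = lambda x: x[0] ** 2 + 3 * x[1]
print(numerical_gradient(f, np.array([2.0, 1.0])))  # close to [4. 3.]
```

Collecting such partial derivatives row by row for a vector-valued function yields exactly the Jacobian in the diagram.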




Optimisation using Gradient Descent #

Gradient descent is an optimisation algorithm used to train machine learning models and neural networks.

  • Gradient descent updates parameters by moving opposite the gradient.

It trains a model by minimising the error between predicted and actual results:

  • it iteratively adjusts the model's parameters,
  • moving step by step in the direction of steepest decrease in the loss function,

until the model has learned the weights that give the best possible predictions.
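The update rule described above can be sketched in a few lines; the loss \( f(w) = (w - 3)^2 \) and the learning rate are illustrative choices, not from the text:

```python
# Minimise f(w) = (w - 3)^2 by repeatedly moving opposite its gradient,
# f'(w) = 2(w - 3).
def gradient_descent(lr=0.1, steps=100):
    w = 0.0                  # initial parameter
    for _ in range(steps):
        grad = 2 * (w - 3)   # gradient of the loss at the current w
        w -= lr * grad       # step in the direction of steepest decrease
    return w

print(round(gradient_descent(), 4))  # converges to the minimiser w = 3.0
```

Each step shrinks the distance to the minimiser by a constant factor here, which is why the iterates converge to \( w = 3 \).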

Types of gradient descent learning algorithms #

  1. Batch gradient descent
  2. Stochastic gradient descent
  3. Mini-batch gradient descent
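The three variants differ only in how much data each update uses: the whole dataset (batch), a single example (stochastic), or a small subset (mini-batch). A sketch of mini-batch SGD for least-squares linear regression; the data, batch size, and learning rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
w_true = np.array([2.0, -1.0])
y = X @ w_true                      # noiseless targets, for clarity

def minibatch_sgd(X, y, lr=0.1, batch_size=16, epochs=50):
    w = np.zeros(X.shape[1])
    n = len(X)
    for _ in range(epochs):
        idx = rng.permutation(n)    # reshuffle the data each epoch
        for start in range(0, n, batch_size):
            b = idx[start:start + batch_size]
            # Gradient of the mean squared error on this mini-batch only.
            grad = 2 * X[b].T @ (X[b] @ w - y[b]) / len(b)
            w -= lr * grad
    return w

print(minibatch_sgd(X, y))  # close to the true weights [2, -1]
```

Setting `batch_size=len(X)` recovers batch gradient descent, and `batch_size=1` recovers stochastic gradient descent, so the three types are one algorithm with different batch sizes.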
