February 26, 2026 · Gradient Descent Algorithm
Gradient Descent Algorithm (GDA) is
- an optimisation method
- used to train models
- by repeatedly updating parameters (weights and biases) to reduce the loss
In deep learning, the default training approach is almost always mini-batch gradient descent, usually with Adam or SGD + momentum.
Gradient Descent is used in both regression and classification.
It’s not tied to the task type; it applies whenever you have a differentiable loss function and parameters to update.
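As a minimal sketch of the update rule, on a toy one-dimensional loss rather than any particular model:

```python
# Gradient descent on a toy loss f(theta) = (theta - 3)^2.
# Its gradient, 2 * (theta - 3), points uphill, so we step the opposite way.
theta = 0.0
learning_rate = 0.1

for _ in range(100):
    grad = 2 * (theta - 3)
    theta -= learning_rate * grad

print(theta)  # converges towards the minimiser, 3
```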
February 21, 2026 · Gradient Descent for Linear Regression
Gradient descent is an iterative optimisation method used to minimise the regression cost function by repeatedly updating parameters in the direction that reduces error.
- Iterative method
- Types: batch / stochastic / mini-batch
Key takeaway:
Gradient descent starts with initial parameter values and repeatedly updates them using the gradient until the cost stops decreasing.
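In symbols, each step is $\theta \leftarrow \theta - \alpha\,\nabla J(\theta)$. A minimal NumPy sketch of batch gradient descent fitting $y \approx wx + b$ by minimising mean squared error (toy data, hypothetical values):

```python
import numpy as np

# Toy data roughly following y = 2x + 1 (hypothetical)
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=50)
y = 2 * X + 1 + rng.normal(scale=0.5, size=50)

w, b, lr = 0.0, 0.0, 0.01
for _ in range(2000):
    y_hat = w * X + b                      # current predictions
    grad_w = 2 * np.mean((y_hat - y) * X)  # dMSE/dw
    grad_b = 2 * np.mean(y_hat - y)        # dMSE/db
    w -= lr * grad_w                       # step against the gradient
    b -= lr * grad_b

print(w, b)  # should approach 2 and 1
```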
```mermaid
flowchart TD
GD["Gradient<br/>Descent"] -->|minimises| CF["Cost<br/>function"]
GD -->|updates| W["Parameters<br/>(weights)"]
GD -->|uses| GR["Gradient<br/>(slope)"]
GD --> H["Hyperparameters"]
H --> LR["Learning<br/>rate"]
H --> BS["Batch<br/>size"]
H --> EP["Epochs"]
style GD fill:#90CAF9,stroke:#1E88E5,color:#000
style CF fill:#CE93D8,stroke:#8E24AA,color:#000
style W fill:#CE93D8,stroke:#8E24AA,color:#000
style GR fill:#CE93D8,stroke:#8E24AA,color:#000
style H fill:#CE93D8,stroke:#8E24AA,color:#000
style LR fill:#CE93D8,stroke:#8E24AA,color:#000
style BS fill:#CE93D8,stroke:#8E24AA,color:#000
style EP fill:#CE93D8,stroke:#8E24AA,color:#000
```
Types of GD
```mermaid
flowchart TD
T["Gradient Descent<br/>types"] --> BGD["Batch<br/>GD"]
T --> SGD["Stochastic<br/>GD"]
T --> MGD["Mini-batch<br/>GD"]
BGD --> ALL["All data<br/>per step"]
BGD --> STB["Smooth<br/>updates"]
SGD --> ONE["1 sample<br/>per step"]
SGD --> FAST["Quick<br/>progress"]
SGD --> NOISE["Noisy<br/>updates"]
MGD --> MB["Small batch<br/>per step"]
MGD --> PRACT["Practical<br/>default"]
style T fill:#90CAF9,stroke:#1E88E5,color:#000
style BGD fill:#C8E6C9,stroke:#2E7D32,color:#000
style SGD fill:#C8E6C9,stroke:#2E7D32,color:#000
style MGD fill:#C8E6C9,stroke:#2E7D32,color:#000
style ALL fill:#CE93D8,stroke:#8E24AA,color:#000
style STB fill:#CE93D8,stroke:#8E24AA,color:#000
style ONE fill:#CE93D8,stroke:#8E24AA,color:#000
style FAST fill:#CE93D8,stroke:#8E24AA,color:#000
style NOISE fill:#CE93D8,stroke:#8E24AA,color:#000
style MB fill:#CE93D8,stroke:#8E24AA,color:#000
style PRACT fill:#CE93D8,stroke:#8E24AA,color:#000
```
Batch
- Computes the gradient over the entire dataset at every step, giving smooth but expensive updates
- Use only if you have huge compute and a lot of time to train
SGD
- Updates parameters from a single sample per step: quick early progress, but noisy updates
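All three variants can be written as one loop that differs only in the batch size. A rough sketch for the linear-regression setting above (names are hypothetical):

```python
import numpy as np

def gd_epoch(X, y, w, b, lr, batch_size):
    """One epoch of gradient descent on MSE for y ~ w*X + b.

    batch_size = len(X) -> batch GD, 1 -> SGD, anything between -> mini-batch.
    """
    idx = np.random.permutation(len(X))        # shuffle once per epoch
    for start in range(0, len(X), batch_size):
        sl = idx[start:start + batch_size]     # indices for this batch
        y_hat = w * X[sl] + b
        w -= lr * 2 * np.mean((y_hat - y[sl]) * X[sl])
        b -= lr * 2 * np.mean(y_hat - y[sl])
    return w, b
```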
March 12, 2026 · Hypothesis Testing
Hypothesis testing is a structured way to decide:
Is what we see in a sample just random variation,
or is there evidence of a real effect in the population?
Hypothesis testing sits inside inferential statistics:
we use a sample to make a statement about a population.
The surrounding toolkit includes:
- Sampling (random and stratified)
- Sampling distribution and Central Limit Theorem
- Estimation (confidence intervals and confidence level)
- Testing hypotheses (mean, proportion, ANOVA)
- Maximum likelihood (MLE)
Key takeaway:
The logic is always the same: assume the null hypothesis, measure how surprising the observed sample would be under it, and reject the null only if the result is surprising enough.
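For a concrete instance of the pattern, a one-sample t-test on a mean with scipy (the sample values are hypothetical):

```python
import numpy as np
from scipy import stats

# H0: the population mean response time is 200 ms.
sample = np.array([204, 198, 211, 195, 207, 202, 199, 210])

t_stat, p_value = stats.ttest_1samp(sample, popmean=200)
print(t_stat, p_value)
# Small p-value (e.g. < 0.05): the sample would be surprising under H0,
# so we reject H0; otherwise the data are consistent with random variation.
```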
February 15, 2026 · Linear NN for Classification
A Linear Neural Network (LNN) for classification uses no hidden layers.
It learns a linear decision boundary and outputs class probabilities, then converts them into predicted classes.
Neural-network view:
- Binary classification → logistic regression (single neuron + sigmoid)
- Multi-class classification → softmax regression (K output neurons + softmax)
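A minimal PyTorch sketch of the multi-class case: a single linear layer plus softmax (the softmax lives inside CrossEntropyLoss). Sizes and data here are illustrative, not from the text.

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 3)           # 4 features in, 3 class scores out: no hidden layers
loss_fn = nn.CrossEntropyLoss()   # applies softmax + cross-entropy to the raw scores
optimiser = torch.optim.SGD(model.parameters(), lr=0.1)

X = torch.randn(32, 4)            # a dummy mini-batch of 32 samples
y = torch.randint(0, 3, (32,))    # dummy integer class labels

for _ in range(100):              # training loop: forward, loss, backward, update
    logits = model(X)
    loss = loss_fn(logits, y)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

probs = torch.softmax(model(X), dim=1)   # inference: class probabilities ...
predicted_class = probs.argmax(dim=1)    # ... converted into predicted classes
```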
```mermaid
flowchart LR
D["Data<br/>X, y"] --> M["Linear model<br/>w, b"]
M --> A["Activation<br/>Sigmoid / Softmax"]
A --> L["Loss<br/>Cross-entropy"]
L --> O["Optimiser<br/>Mini-batch GD / Adam"]
O --> P["Updated parameters<br/>w, b"]
P --> I["Inference<br/>Probabilities → class"]
%% Pastel colour scheme
style D fill:#E3F2FD,stroke:#1E88E5,stroke-width:1px
style M fill:#E8F5E9,stroke:#43A047,stroke-width:1px
style A fill:#FFF3E0,stroke:#FB8C00,stroke-width:1px
style L fill:#FCE4EC,stroke:#D81B60,stroke-width:1px
style O fill:#F3E5F5,stroke:#8E24AA,stroke-width:1px
style P fill:#E0F7FA,stroke:#00838F,stroke-width:1px
style I fill:#F1F8E9,stroke:#558B2F,stroke-width:1px
```
Classification
Classification predicts a discrete class label.
Common settings:
- Binary classification (two classes)
- Multi-class classification (one of K classes)
Linear models for Classification
- categorise data by finding a linear boundary (hyperplane) that separates classes
- score each input by calculating a weighted sum of the input features plus a bias, as in the snippet below
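In code, that decision rule is just a dot product and a threshold (the weights and input here are hypothetical):

```python
import numpy as np

w = np.array([2.0, -1.0])    # hypothetical learned weights
b = -0.5                     # hypothetical bias
x = np.array([1.0, 0.5])     # one input sample

z = w @ x + b                # weighted sum of input features plus bias
label = 1 if z >= 0 else 0   # which side of the hyperplane w.x + b = 0 we are on
print(z, label)
```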
```mermaid
flowchart TD
T["Linear<br/>classification<br/>models"] --> P["Perceptron"]
T --> LR["Logistic<br/>regression"]
T --> SVM["Linear<br/>SVM"]
P -->|uses| STEP["Step<br/>activation"]
LR -->|uses| SIG["Sigmoid<br/>+ log loss"]
SVM -->|uses| HNG["Hinge<br/>loss"]
style T fill:#90CAF9,stroke:#1E88E5,color:#000
style P fill:#C8E6C9,stroke:#2E7D32,color:#000
style LR fill:#C8E6C9,stroke:#2E7D32,color:#000
style SVM fill:#C8E6C9,stroke:#2E7D32,color:#000
style STEP fill:#CE93D8,stroke:#8E24AA,color:#000
style SIG fill:#CE93D8,stroke:#8E24AA,color:#000
style HNG fill:#CE93D8,stroke:#8E24AA,color:#000
```
Discriminant Functions
Decision Theory
Probabilistic Discriminative Classifiers
Logistic Regression
- Supervised machine learning algorithm
- Binary classification algorithm
- learns a linear decision boundary, so it works best when the classes are (approximately) linearly separable
- predicts the probability that an input belongs to a specific class
- uses the sigmoid function to map the linear score to a probability between 0 and 1
Key takeaway:
Logistic regression predicts $P(y=1\mid x)$ using a sigmoid of a linear score $z=w\cdot x+b$,
then learns $w,b$ by maximising likelihood (equivalently minimising log-loss).
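A minimal NumPy sketch of that takeaway: sigmoid of a linear score, trained by gradient descent on the log-loss (toy one-feature data, hypothetical values):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([0.5, 1.5, 3.0, 4.0])   # one feature per sample (toy data)
y = np.array([0, 0, 1, 1])

w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    p = sigmoid(w * X + b)           # predicted P(y = 1 | x)
    w -= lr * np.mean((p - y) * X)   # gradient of the average log-loss w.r.t. w
    b -= lr * np.mean(p - y)         # gradient of the average log-loss w.r.t. b

print(sigmoid(w * X + b))            # probabilities, ready to threshold at 0.5
```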
December 14, 2025 · Foundation Model
AI models trained on massive datasets to perform a wide range of tasks with minimal fine-tuning.
Foundation models:
- are large deep learning neural networks
- are trained on massive, diverse datasets (text, images, audio, or multiple modalities)
- contain millions or billions of parameters
- are designed for general-purpose use across a broad range of tasks, not a single task
- act as base models for building specialised AI applications
LLM – Large Language Model
Large Language Models (LLMs) are advanced AI systems designed to process, understand, and generate human-like text.
They learn language by analysing massive amounts of text data, discovering patterns in:
- grammar
- meaning
- context
- relationships between words and sentences
LLMs are:
- built on deep learning
- implemented using neural networks
- based on the Transformer architecture
They are often combined with tools like:
- Retrieval (RAG)
- Agents
- External APIs
- Memory systems
What makes an LLM special?
- Built using deep neural networks
- Trained on very large datasets (books, articles, code, web text)
- Can perform many tasks without task-specific training
- General-purpose language understanding, not single-task models
LLMs are based on the Transformer Architecture, which allows models to understand context and long-range dependencies in text.
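The mechanism behind that long-range context is attention. A bare NumPy sketch of scaled dot-product self-attention (toy sizes, no learned projections):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Every output position mixes information from all positions,
    # which is what lets the model use long-range context.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ V

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))   # 5 token embeddings of dimension 8 (toy)
out = scaled_dot_product_attention(tokens, tokens, tokens)  # self-attention
print(out.shape)  # (5, 8)
```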
December 15, 2025 · AI Agents
Also referred to as Agentic AI.
AI agents are intelligent systems that can plan, make decisions, and take actions to achieve goals with minimal human intervention.
A common use case is task automation, for example booking travel based on a user’s request.
AI agents typically build on Generative AI and use Large Language Models (LLMs) as the reasoning core.
Agents often interact with tools (APIs, databases, calendars) to complete multi-step workflows.
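A rough sketch of that plan-act-observe loop; everything here (the tool names, the scripted plan) is a hypothetical stand-in for what an LLM would decide at each step:

```python
# Hypothetical tools; a real agent would call APIs, databases or calendars.
def search_flights(route):
    return f"Cheapest flight on {route}: $420"

def book_flight(offer):
    return f"Booked: {offer}"

TOOLS = {"search_flights": search_flights, "book_flight": book_flight}

# Scripted stand-in for the LLM reasoning core, which would normally pick
# the next tool and argument from the goal and the observations so far.
plan = [
    ("search_flights", "LHR-JFK"),
    ("book_flight", "Cheapest flight on LHR-JFK: $420"),
]

history = []
for tool_name, arg in plan:                 # plan -> act -> observe
    observation = TOOLS[tool_name](arg)
    history.append((tool_name, observation))

print(history)
```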
Retrieval-Augmented Generation (RAG)
Retrieval-Augmented Generation (RAG) is a system design pattern that improves an LLM’s answers by:
- Retrieving relevant information from an external knowledge source, and then
- Augmenting the LLM prompt with that retrieved context before generating the final response.
RAG helps an LLM look things up first, then answer using evidence.
Why RAG is Useful
RAG is commonly used when:
- Your knowledge is in private documents (PDFs, policies, internal wiki)
- You need up-to-date information (things not in the model’s training data)
- You want fewer hallucinations by grounding answers in retrieved sources
- You want traceability (show “where the answer came from”)
RAG does not change the model weights.
It changes what the model sees at inference time by adding retrieved context.
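A toy end-to-end sketch of the pattern, with TF-IDF standing in for a neural embedding model and the final LLM call left out:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Refunds are processed within 14 days of the return being received.",
    "Our support line is open Monday to Friday, 9am to 5pm.",
    "Premium members get free shipping on all orders.",
]
question = "How long do refunds take?"

# Retrieve: rank the documents by similarity to the question.
vectoriser = TfidfVectorizer()
doc_vectors = vectoriser.fit_transform(docs)
query_vector = vectoriser.transform([question])
best = cosine_similarity(query_vector, doc_vectors).argmax()

# Augment: the retrieved context is added to the prompt; model weights
# are untouched, only what the model sees at inference time changes.
prompt = f"Context: {docs[best]}\n\nQuestion: {question}\nAnswer using only the context."
print(prompt)  # this prompt would then be sent to the LLM
```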
March 18, 2026 · Mathematical Foundations for Machine Learning
Machine Learning is built on mathematical principles that allow models to:
- represent data
- learn patterns
- optimise performance
```mermaid
flowchart LR
DATA[Data]
MATH[Math Models]
OPT[Optimisation]
MODEL[Trained Model]
DATA --> MATH
MATH --> OPT
OPT --> MODEL
```
Understanding how ML algorithms work internally requires core mathematical tools: algebra deals with relationships between variables and quantities, while calculus focuses on change and optimisation.
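A tiny illustration of that split: algebra states the relationship, calculus differentiates it, and optimisation follows the derivative (toy loss, hypothetical numbers):

```python
import sympy as sp

w = sp.symbols("w")
loss = (3 * w - 6) ** 2            # algebra: a relationship between quantities
grad = sp.diff(loss, w)            # calculus: how the loss changes with w

w_val, step = 0.0, 0.05
for _ in range(50):                # optimisation: follow the negative gradient
    w_val -= step * float(grad.subs(w, w_val))

print(w_val)  # approaches the minimiser, w = 2
```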