December 15, 2025

## Generative AI
Generative Artificial Intelligence (GenAI) refers to a class of AI systems that can generate new content such as text, images, audio, video, or code, rather than only making predictions or classifications.
GenAI systems learn patterns and representations from large datasets and use them to produce novel outputs that resemble the data they were trained on.
### How Generative AI Differs from Traditional AI
| Traditional AI | Generative AI |
|---|---|
| Predicts or classifies | Generates new content |
| Task-specific models | General-purpose models |
| Fixed outputs | Open-ended outputs |
| Often rule-based | Data-driven and probabilistic |
### Core Idea of Generative AI
Instead of learning “what label to assign”, Generative AI learns “how data is structured” and then creates new data following that structure.
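As a toy illustration of this idea, the sketch below learns character-level structure from a tiny corpus and then samples new text that follows that structure. This is nowhere near a real GenAI system (the corpus, function names, and bigram model are illustrative only), but the principle is the same: learn how the data is put together, then generate new data the same way.

```python
import random

def train_bigrams(corpus):
    """Learn which character tends to follow each character (the 'structure' of the data)."""
    model = {}
    for a, b in zip(corpus, corpus[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model, start, length, seed=0):
    """Sample new text that follows the learned structure."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return "".join(out)

corpus = "the cat sat on the mat. the cat ate the rat."
model = train_bigrams(corpus)
print(generate(model, "t", 20))
```

Every pair of adjacent characters in the generated text was seen in the training data, yet the overall string is new: structure is learned, content is sampled.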
December 14, 2025

## Foundation Model
AI models trained on massive datasets that can perform a wide range of tasks with minimal fine-tuning.

Foundation models:
- are large deep learning neural networks trained on massive and diverse datasets (text, images, audio, or multiple modalities)
- contain millions or billions of parameters
- are designed for general-purpose intelligence, not a single task
- act as base models for building specialised AI applications
### LLM – Large Language Model
Large Language Models (LLMs) are advanced AI systems designed to process, understand, and generate human-like text.
They learn language by analysing massive amounts of text data, discovering patterns in:
- grammar
- meaning
- context
- relationships between words and sentences
LLMs are:
- Built on Deep Learning
- Implemented using Neural Networks
- Based on Transformers
- Often combined with tools like:
  - Retrieval (RAG)
  - Agents
  - External APIs
  - Memory systems
#### What makes an LLM special?
- Built using deep neural networks
- Trained on very large datasets (books, articles, code, web text)
- Can perform many tasks without task-specific training
- General-purpose language understanding, not single-task models
LLMs are based on the Transformer Architecture, which allows models to understand context and long-range dependencies in text.
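The core Transformer operation is scaled dot-product self-attention, which is what lets every position look at every other position, however far away. A minimal sketch in plain Python (function names are illustrative; real implementations are batched, use learned query/key/value projections, and run on tensors):

```python
import math

def softmax(xs):
    """Turn raw scores into attention weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: each query attends over ALL positions,
    so the output at one position can depend on distant tokens."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs

# three toy 2-dimensional token vectors
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = attention(x, x, x)  # self-attention: queries, keys, values from the same tokens
```

Because each output row is a weighted mix of all value vectors, context from any position can flow to any other in a single step, unlike recurrent models that pass information one token at a time.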
### Retrieval-Augmented Generation (RAG)
Retrieval-Augmented Generation (RAG) is a system design pattern that improves an LLM’s answers by:
- Retrieving relevant information from an external knowledge source, and then
- Augmenting the LLM prompt with that retrieved context before generating the final response.
RAG helps an LLM look things up first, then answer using evidence.
#### Why RAG is Useful
RAG is commonly used when:
- Your knowledge is in private documents (PDFs, policies, internal wiki)
- You need up-to-date information (things not in the model’s training data)
- You want fewer hallucinations by grounding answers in retrieved sources
- You want traceability (show “where the answer came from”)
RAG does not change the model weights; it changes what the model sees at inference time by adding retrieved context to the prompt.
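The retrieve-then-augment pattern above can be sketched in a few lines. This toy version scores documents by naive word overlap; real systems typically use embeddings and a vector store, and the helper names `retrieve` and `build_prompt` are assumptions for illustration:

```python
def retrieve(query, documents, k=2):
    """Rank documents by naive word overlap with the query; return the top-k."""
    q_words = set(query.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query, documents):
    """Augment the user question with retrieved context before calling an LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (f"Answer using only the context below.\n"
            f"Context:\n{context}\n"
            f"Question: {query}")

docs = [
    "Refunds are processed within 5 business days.",
    "Our office is closed on public holidays.",
    "Support tickets are answered within 24 hours.",
]
prompt = build_prompt("How long do refunds take?", docs)
# `prompt` is what gets sent to the LLM; the model weights are untouched.
```

Note that the LLM itself never appears in the retrieval step: all the work happens in what gets placed into the prompt, which is why RAG can be bolted onto any existing model.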