<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Linear Algebra on Arshad Siddiqui</title><link>https://arshadhs.github.io/tags/linear-algebra/</link><description>Recent content in Linear Algebra on Arshad Siddiqui</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Wed, 18 Mar 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://arshadhs.github.io/tags/linear-algebra/index.xml" rel="self" type="application/rss+xml"/><item><title>Basis and Rank</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/020-basis-and-rank/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/020-basis-and-rank/</guid><description>&lt;h1 id="basis-and-rank">
 Basis and Rank
 
 &lt;a class="anchor" href="#basis-and-rank">#&lt;/a>
 
&lt;/h1>
&lt;p>A &lt;strong>basis&lt;/strong> is a minimal set of linearly independent vectors that spans a space.&lt;/p>
&lt;p>The &lt;strong>dimension&lt;/strong> of a space is the number of vectors in a basis.&lt;/p>
&lt;blockquote class="book-hint info">
&lt;p>Key Idea:
Basis = independence + spanning.
Rank tells us how many independent directions exist in a matrix.&lt;/p>
&lt;/blockquote>
&lt;p>A basis must satisfy two conditions ⭐&lt;/p>
&lt;ol>
&lt;li>Vectors must be linearly independent&lt;/li>
&lt;li>Vectors must span the space&lt;/li>
&lt;/ol>
&lt;p>This means:&lt;/p>
&lt;ul>
&lt;li>No redundancy (independence)&lt;/li>
&lt;li>Full coverage (spanning)&lt;/li>
&lt;/ul>

&lt;span>
 \[ 
\text{Span}(v_1, v_2, \dots, v_k) = V
 \]
 &lt;/span>



&lt;span>
 \[ 
c_1 v_1 + \cdots + c_k v_k = 0 \Rightarrow c_i = 0
 \]
 &lt;/span>
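The two defining conditions can be checked numerically: k vectors form a basis of an n-dimensional space exactly when k = n and the matrix built from them has full rank. A minimal sketch using NumPy (the vectors are illustrative):

```python
import numpy as np

# Candidate basis vectors for R^3, stacked as columns
V = np.column_stack([[1, 0, 0], [1, 1, 0], [1, 1, 1]])

# Full rank (= 3) means the columns are linearly independent
# AND span R^3, so together they form a basis
rank = np.linalg.matrix_rank(V)
print(rank == 3)  # True: the three vectors form a basis of R^3
```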


&lt;hr>
&lt;h2 id="why-basis-matters">
 Why Basis Matters
 
 &lt;a class="anchor" href="#why-basis-matters">#&lt;/a>
 
&lt;/h2>
&lt;ul>
&lt;li>Represents space efficiently&lt;/li>
&lt;li>Removes redundancy&lt;/li>
&lt;li>Helps define coordinates&lt;/li>
&lt;li>Used in ML for feature representation&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h1 id="dimension">
 Dimension
 
 &lt;a class="anchor" href="#dimension">#&lt;/a>
 
&lt;/h1>
&lt;p>Dimension is the number of vectors in a basis.&lt;/p></description></item><item><title>Linear Independence</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/010-linear-independence/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/010-linear-independence/</guid><description>&lt;h1 id="linear-independence">
 Linear Independence
 
 &lt;a class="anchor" href="#linear-independence">#&lt;/a>
 
&lt;/h1>
&lt;p>A set of vectors is &lt;strong>linearly independent&lt;/strong> if none of them can be written as a linear combination of the others.&lt;/p>

&lt;span style="color: green;">
 &lt;span>
 \[ 
c_1\mathbf{v}_1 + \cdots + c_k\mathbf{v}_k = \mathbf{0}
\;\Rightarrow\;
c_1=\cdots=c_k=0
 \]
 &lt;/span>

&lt;/span>
&lt;p>Independence means each vector adds &lt;strong>new information&lt;/strong>.&lt;/p>
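The definition can be tested in practice: the only solution of c1 v1 + ... + ck vk = 0 is the zero vector exactly when the stacked matrix has rank k. A small sketch with illustrative vectors:

```python
import numpy as np

v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([4.0, 5.0, 6.0])
v3 = v1 + v2  # deliberately redundant: a combination of v1 and v2

# Stack the vectors as columns; rank < number of vectors
# means the set is linearly dependent
A = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(A))  # 2: v3 adds no new information
```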
&lt;h2 id="why-it-matters">
 Why it matters
 
 &lt;a class="anchor" href="#why-it-matters">#&lt;/a>
 
&lt;/h2>
&lt;ul>
&lt;li>Detects redundancy&lt;/li>
&lt;li>Connects to rank and basis&lt;/li>
&lt;/ul>
&lt;p>If one vector can already be formed using others, it does not add anything new.&lt;/p></description></item><item><title>Norm</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/030-norm/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/030-norm/</guid><description>&lt;h1 id="norm">
 Norm
 
 &lt;a class="anchor" href="#norm">#&lt;/a>
 
&lt;/h1>
&lt;p>A &lt;strong>norm&lt;/strong> measures the length (magnitude) of a vector.&lt;/p>
&lt;ul>
&lt;li>the norm of a vector x measures the distance from the origin to the point x.&lt;/li>
&lt;/ul>
&lt;p>Common example: Euclidean norm.&lt;/p>

&lt;span>
 \[ 
\lVert \mathbf{x} \rVert_2 = \sqrt{x_1^2 + \cdots + x_n^2}
 \]
 &lt;/span>


&lt;blockquote class="book-hint info">
&lt;p>Key Idea:
Norm = measure of size or length of a vector.
It generalises the idea of distance in geometry to higher dimensions.&lt;/p>
&lt;/blockquote>
&lt;hr>
&lt;h2 id="common-norms">
 Common norms
 
 &lt;a class="anchor" href="#common-norms">#&lt;/a>
 
&lt;/h2>
&lt;ul>
&lt;li>L1&lt;/li>
&lt;li>L2&lt;/li>
&lt;li>Infinity norm&lt;/li>
&lt;/ul>
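The three norms listed above can each be computed from the same vector; a quick sketch using NumPy's `norm` function:

```python
import numpy as np

x = np.array([3.0, -4.0])

l1 = np.linalg.norm(x, 1)         # L1: |3| + |-4| = 7
l2 = np.linalg.norm(x)            # L2 (Euclidean, the default): sqrt(9 + 16) = 5
linf = np.linalg.norm(x, np.inf)  # Infinity norm: max(|3|, |-4|) = 4

print(l1, l2, linf)  # 7.0 5.0 4.0
```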
&lt;h2 id="why-it-matters">
 Why it matters
 
 &lt;a class="anchor" href="#why-it-matters">#&lt;/a>
 
&lt;/h2>
&lt;ul>
&lt;li>norms quantify the size of a vector&lt;/li>
&lt;li>norms are used in distance measures and regularisation&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h1 id="intuition-from-lectures">
 Intuition (From Lectures)
 
 &lt;a class="anchor" href="#intuition-from-lectures">#&lt;/a>
 
&lt;/h1>
&lt;p>From lecture discussions on analytic geometry:&lt;/p></description></item><item><title>Inner Products and Dot Product</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/040-inner-products/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/040-inner-products/</guid><description>&lt;h1 id="inner-products-and-dot-product">
 Inner Products and Dot Product
 
 &lt;a class="anchor" href="#inner-products-and-dot-product">#&lt;/a>
 
&lt;/h1>
&lt;p>An &lt;strong>inner product&lt;/strong> maps two vectors to a &lt;strong>single scalar&lt;/strong>.&lt;/p>
&lt;p>It allows us to measure:&lt;/p>
&lt;ul>
&lt;li>similarity&lt;/li>
&lt;li>vector length&lt;/li>
&lt;li>projections&lt;/li>
&lt;li>orthogonality&lt;/li>
&lt;/ul>
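For the Euclidean dot product, each of these measurements comes from the same scalar; a minimal sketch (vectors are illustrative):

```python
import numpy as np

a = np.array([1.0, 0.0])
b = np.array([1.0, 1.0])

dot = a @ b                  # inner product: a single scalar
length_b = np.sqrt(b @ b)    # vector length via <b, b>
cos_angle = dot / (np.linalg.norm(a) * np.linalg.norm(b))  # similarity
proj_b_on_a = (dot / (a @ a)) * a                          # projection of b onto a

print(dot, round(cos_angle, 4))  # 1.0 0.7071
```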


&lt;script src="https://arshadhs.github.io/mermaid.min.js">&lt;/script>

 &lt;script>mermaid.initialize({
 "flowchart": {
 "useMaxWidth":true
 },
 "theme": "default"
}
)&lt;/script>




&lt;pre class="mermaid">
flowchart TD
T[&amp;#34;Inner&amp;lt;br/&amp;gt;products&amp;lt;br/&amp;gt;(types)&amp;#34;] --&amp;gt; DOT[&amp;#34;Euclidean&amp;lt;br/&amp;gt;Dot product&amp;#34;]
T --&amp;gt; WIP[&amp;#34;Weighted&amp;lt;br/&amp;gt;inner product&amp;#34;]
T --&amp;gt; FN[&amp;#34;Function-space&amp;lt;br/&amp;gt;(integral)&amp;#34;]
T --&amp;gt; HERM[&amp;#34;Complex&amp;lt;br/&amp;gt;Hermitian&amp;#34;]
T --&amp;gt; MAT[&amp;#34;Matrix&amp;lt;br/&amp;gt;inner product&amp;lt;br/&amp;gt;(Frobenius)&amp;#34;]

DOT --&amp;gt; Rn[&amp;#34;Vectors in&amp;lt;br/&amp;gt;
&amp;lt;span&amp;gt;
 \( \mathbb{R}^n \)
 &amp;lt;/span&amp;gt;

&amp;#34;]
WIP --&amp;gt; SPD[&amp;#34;SPD matrix&amp;lt;br/&amp;gt;W&amp;#34;]
FN --&amp;gt; L2[&amp;#34;L2 space&amp;lt;br/&amp;gt;functions&amp;#34;]
HERM --&amp;gt; Cn[&amp;#34;Vectors in&amp;lt;br/&amp;gt;C^n&amp;#34;]
MAT --&amp;gt; Mnm[&amp;#34;Matrices&amp;lt;br/&amp;gt;R^{m×n}&amp;#34;]

style T fill:#90CAF9,stroke:#1E88E5,color:#000

style DOT fill:#C8E6C9,stroke:#2E7D32,color:#000
style WIP fill:#C8E6C9,stroke:#2E7D32,color:#000
style FN fill:#C8E6C9,stroke:#2E7D32,color:#000
style HERM fill:#C8E6C9,stroke:#2E7D32,color:#000
style MAT fill:#C8E6C9,stroke:#2E7D32,color:#000

style Rn fill:#CE93D8,stroke:#8E24AA,color:#000
style SPD fill:#CE93D8,stroke:#8E24AA,color:#000
style L2 fill:#CE93D8,stroke:#8E24AA,color:#000
style Cn fill:#CE93D8,stroke:#8E24AA,color:#000
style Mnm fill:#CE93D8,stroke:#8E24AA,color:#000
&lt;/pre>

&lt;hr>
&lt;h2 id="definition">
 Definition
 
 &lt;a class="anchor" href="#definition">#&lt;/a>
 
&lt;/h2>
&lt;p>For vectors&lt;br>

&lt;span>
 \( \mathbf{a}, \mathbf{b} \in \mathbb{R}^n \)
 &lt;/span>

&lt;/p></description></item><item><title>Lengths and Distances</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/050-lengths-and-distances/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/050-lengths-and-distances/</guid><description>&lt;h1 id="lengths-and-distances">
 Lengths and Distances
 
 &lt;a class="anchor" href="#lengths-and-distances">#&lt;/a>
 
&lt;/h1>
&lt;p>The &lt;strong>length&lt;/strong> of a vector is given by its norm.&lt;/p>
&lt;p>The &lt;strong>distance&lt;/strong> between two points (vectors) is the norm of their difference.&lt;/p>
&lt;p>Distance quantifies &lt;strong>how far two vectors (data points) are&lt;/strong> from each other.&lt;/p>

&lt;span>
 \[ 
d(\mathbf{x},\mathbf{y}) = \lVert \mathbf{x} - \mathbf{y} \rVert
 \]
 &lt;/span>
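The formula above is just a norm applied to a difference; for example:

```python
import numpy as np

x = np.array([1.0, 2.0])
y = np.array([4.0, 6.0])

# d(x, y) = ||x - y||  (Euclidean norm by default)
d = np.linalg.norm(x - y)
print(d)  # 5.0: sqrt(3^2 + 4^2)
```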


&lt;blockquote class="book-hint info">
&lt;p>Key Idea:
Length measures size of a single vector.
Distance measures separation between two vectors.
Distance = norm applied to difference.&lt;/p></description></item><item><title>Angles and Orthogonality</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/060-angles-and-orthogonality/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/060-angles-and-orthogonality/</guid><description>&lt;h1 id="angles-and-orthogonality">
 Angles and Orthogonality
 
 &lt;a class="anchor" href="#angles-and-orthogonality">#&lt;/a>
 
&lt;/h1>
&lt;p>Once we define an inner product, we can define the &lt;strong>angle between two vectors&lt;/strong>.&lt;/p>
&lt;p>Angles allow us to measure how aligned or different two vectors are in space.&lt;/p>
&lt;blockquote class="book-hint info">
&lt;p>Key Idea:
Angle measures similarity between vectors.
Orthogonality (zero inner product) means the vectors point in completely unrelated directions (no similarity).&lt;/p>
&lt;/blockquote>
&lt;h2 id="why-it-matters-in-machine-learning">
 Why It Matters in Machine Learning
 
 &lt;a class="anchor" href="#why-it-matters-in-machine-learning">#&lt;/a>
 
&lt;/h2>
&lt;ul>
&lt;li>PCA produces orthogonal components&lt;/li>
&lt;li>Orthogonal features reduce redundancy&lt;/li>
&lt;li>Gradient directions depend on angle&lt;/li>
&lt;/ul>
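The angle comes from the inner product via cos θ = ⟨x, y⟩ / (‖x‖ ‖y‖); a small sketch with illustrative vectors:

```python
import numpy as np

x = np.array([1.0, 0.0])
y = np.array([0.0, 2.0])

# cos(theta) = <x, y> / (||x|| * ||y||)
cos_theta = (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))
theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))  # clip guards rounding error

print(np.isclose(cos_theta, 0.0))  # True: x and y are orthogonal
print(np.degrees(theta))           # ~90 degrees
```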
&lt;hr>
&lt;h1 id="angle-formula">
 Angle Formula
 
 &lt;a class="anchor" href="#angle-formula">#&lt;/a>
 
&lt;/h1>
&lt;p>For vectors in n-dimensional space:&lt;/p></description></item><item><title>Orthonormal Basis</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/070-orthonormal-basis/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/070-orthonormal-basis/</guid><description>&lt;h1 id="orthonormal-basis">
 Orthonormal Basis
 
 &lt;a class="anchor" href="#orthonormal-basis">#&lt;/a>
 
&lt;/h1>
&lt;p>A basis is &lt;strong>orthonormal&lt;/strong> if its vectors are:&lt;/p>
&lt;ul>
&lt;li>orthogonal to each other&lt;/li>
&lt;li>each has unit length&lt;/li>
&lt;/ul>

&lt;span>
 \[ 
\langle \mathbf{e}_i, \mathbf{e}_j \rangle =
\begin{cases}
1 &amp; i=j \\
0 &amp; i\ne j
\end{cases}
 \]
 &lt;/span>


&lt;blockquote class="book-hint info">
&lt;p>Key Idea:
Orthonormal basis = perfectly independent + perfectly scaled.
This makes computations extremely simple and stable.&lt;/p>
&lt;/blockquote>
&lt;p>Why it matters: orthonormal bases make projections and computations simple.&lt;/p>
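The defining condition ⟨e_i, e_j⟩ = δ_ij can be verified in one matrix product: if Q holds the basis vectors as columns, Q^T Q must equal the identity. A sketch (the basis here is illustrative):

```python
import numpy as np

# Columns: an orthonormal basis of R^2 (rotation by 45 degrees)
c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
Q = np.array([[c, -s],
              [s,  c]])

# Q^T Q = I  <=>  columns are pairwise orthogonal with unit length
print(np.allclose(Q.T @ Q, np.eye(2)))  # True
```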
&lt;hr>
&lt;h1 id="intuition-from-lectures">
 Intuition (From Lectures)
 
 &lt;a class="anchor" href="#intuition-from-lectures">#&lt;/a>
 
&lt;/h1>
&lt;p>From analytic geometry and vector space lectures:&lt;/p></description></item><item><title>Mathematical Foundation</title><link>https://arshadhs.github.io/docs/ai/maths/</link><pubDate>Wed, 18 Mar 2026 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/</guid><description>&lt;h1 id="mathematical-foundations-for-machine-learning">
 Mathematical Foundations for Machine Learning
 
 &lt;a class="anchor" href="#mathematical-foundations-for-machine-learning">#&lt;/a>
 
&lt;/h1>
&lt;p>Machine Learning is built on &lt;strong>mathematical principles&lt;/strong> that allow models to:&lt;/p>
&lt;ul>
&lt;li>represent data&lt;/li>
&lt;li>learn patterns&lt;/li>
&lt;li>optimise performance&lt;/li>
&lt;/ul>


&lt;script src="https://arshadhs.github.io/mermaid.min.js">&lt;/script>

 &lt;script>mermaid.initialize({
 "flowchart": {
 "useMaxWidth":true
 },
 "theme": "default"
}
)&lt;/script>




&lt;pre class="mermaid">
flowchart LR
 DATA[Data]
 MATH[Math Models]
 OPT[Optimisation]
 MODEL[Trained Model]

 DATA --&amp;gt; MATH
 MATH --&amp;gt; OPT
 OPT --&amp;gt; MODEL
&lt;/pre>

&lt;p>ML requires &lt;strong>core mathematical tools&lt;/strong> to understand how ML algorithms work internally. Algebra deals with relationships between variables and quantities, while Calculus focuses on change and optimization.&lt;/p></description></item><item><title>Linear Algebra</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/</link><pubDate>Wed, 18 Mar 2026 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/</guid><description>&lt;h1 id="linear-algebra">
 Linear Algebra
 
 &lt;a class="anchor" href="#linear-algebra">#&lt;/a>
 
&lt;/h1>
&lt;p>Linear Algebra is the &lt;strong>study of vectors and matrices&lt;/strong>.&lt;/p>
&lt;p>Linear Algebra provides the &lt;strong>mathematical language&lt;/strong> used &lt;strong>to represent data, transformations, and structure&lt;/strong> in ML.&lt;/p>




&lt;ul>
 
 
 
 
 
 
 
 
 
 
 
 
 
 

 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/">Linear Systems&lt;/a>

 
 



&lt;ul>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/010-systems-of-linear-equations/">Systems of Linear Equations&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/020-matrices/">Matrices&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/matrix-transposition/">Matrix Transposition&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/030-solving-linear-systems/">Solving Linear Systems&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/forward-backward/">Forward and Backward Substitution&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/inverse-matrix/">Inverse Matrix&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/convex/">Convex Combination&lt;/a>
 &lt;/li>
 
 

 
 
&lt;/ul>

 &lt;/li>
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/">Vector Spaces&lt;/a>

 
 



&lt;ul>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/020-basis-and-rank/">Basis and Rank&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/010-linear-independence/">Linear Independence&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/030-norm/">Norm&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/040-inner-products/">Inner Products and Dot Product&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/050-lengths-and-distances/">Lengths and Distances&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/060-angles-and-orthogonality/">Angles and Orthogonality&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/070-orthonormal-basis/">Orthonormal Basis&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/feature-space/">Feature Space&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/cauchyschwarz/">Cauchy–Schwarz&lt;/a>
 &lt;/li>
 
 

 
 
&lt;/ul>

 &lt;/li>
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/">Matrix Decompositions&lt;/a>

 
 



&lt;ul>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/special-matrices/">Special Matrices&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/characteristic-polynomial/">Characteristic Polynomial&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/010-determinant-and-trace/">Determinant and Trace&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/020-eigenvalues-and-eigenvectors/">Eigenvalues and Eigenvectors&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/030-cholesky-decomposition/">Cholesky Decomposition&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/040-eigen-decomposition/">Eigen Decomposition&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/diagonalization/">Diagonalization&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/050-singular-value-decomposition/">Singular Value Decomposition (SVD)&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/060-matrix-approximation/">Matrix Approximation&lt;/a>
 &lt;/li>
 
 

 
 
&lt;/ul>

 &lt;/li>
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/">Dimensionality reduction and PCA&lt;/a>

 
 



&lt;ul>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/pca/">Principal Component Analysis (PCA)&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/pca-theory/">PCA Theory&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/pca-practice/">PCA in Practice&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/latent-variable-view/">Latent Variable Perspective&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/svm-mathematical-foundations/">Mathematical Preliminaries of SVM&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/kernels/">Nonlinear SVM and Kernels&lt;/a>
 &lt;/li>
 
 

 
 
&lt;/ul>

 &lt;/li>
 
&lt;/ul>


&lt;hr>
&lt;h2 id="why-linear-algebra-matters-in-ml">
 Why Linear Algebra Matters in ML
 
 &lt;a class="anchor" href="#why-linear-algebra-matters-in-ml">#&lt;/a>
 
&lt;/h2>
&lt;ul>
&lt;li>Every machine learning model uses matrices&lt;/li>
&lt;li>All data in ML is represented using &lt;strong>vectors and matrices&lt;/strong>&lt;/li>
&lt;li>Neural networks are pipelines of matrix operations&lt;/li>
&lt;li>Models apply &lt;strong>matrix transformations&lt;/strong> to data&lt;/li>
&lt;li>Optimisation relies on linear algebra operations&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h2 id="what-to-learn">
 What to Learn
 
 &lt;a class="anchor" href="#what-to-learn">#&lt;/a>
 
&lt;/h2>
&lt;ul>
&lt;li>Scalars, vectors, and matrices&lt;/li>
&lt;li>Vector operations (addition, dot product)&lt;/li>
&lt;li>Matrix multiplication &lt;em>(critical)&lt;/em>&lt;/li>
&lt;li>Identity matrices and transpose&lt;/li>
&lt;li>Eigenvalues and eigenvectors &lt;em>(conceptual understanding)&lt;/em>&lt;/li>
&lt;/ul>
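Each item on this list maps to a one-line operation in code; a quick tour using NumPy (values are illustrative):

```python
import numpy as np

v = np.array([1, 2])          # vector
w = np.array([3, 4])
A = np.array([[1, 2],
              [3, 4]])        # matrix

print(v + w)                  # vector addition: [4 6]
print(v @ w)                  # dot product: 11
print(A @ A)                  # matrix multiplication (critical)
print(A.T)                    # transpose
print(np.linalg.eigvals(A))   # eigenvalues (conceptual understanding)
```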
&lt;hr>
&lt;ul>
&lt;li>&lt;strong>Scalar&lt;/strong> → a number&lt;/li>
&lt;li>&lt;strong>Vector&lt;/strong> → a directed point&lt;/li>
&lt;li>&lt;strong>Matrix&lt;/strong> → a space transformer&lt;/li>
&lt;li>&lt;strong>Linear transformation&lt;/strong> → structured mapping&lt;/li>
&lt;li>&lt;strong>Feature&lt;/strong> → one axis&lt;/li>
&lt;li>&lt;strong>Feature space&lt;/strong> → where data lives&lt;/li>
&lt;li>&lt;strong>Vector space&lt;/strong> → where vectors live&lt;/li>
&lt;/ul>
&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/">
 Mathematical Foundation
&lt;/a>&lt;/p></description></item><item><title>Linear Systems</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/</link><pubDate>Thu, 29 Jan 2026 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/</guid><description>&lt;h1 id="linear-systems">
 Linear Systems
 
 &lt;a class="anchor" href="#linear-systems">#&lt;/a>
 
&lt;/h1>
&lt;p>How systems of linear equations are represented and solved using matrices.&lt;/p>
&lt;ul>
&lt;li>the study of vectors and the rules for manipulating them&lt;/li>
&lt;li>describe multiple linear equations solved simultaneously&lt;/li>
&lt;li>connect algebraic equations with matrix representations&lt;/li>
&lt;/ul>
&lt;hr>
&lt;p>&lt;img src="https://arshadhs.github.io/images/ai/matrix_vector_operations.png" alt="Matrix" />&lt;/p>
&lt;hr>
&lt;h2 id="idea-of-closure">
 Idea of Closure
 
 &lt;a class="anchor" href="#idea-of-closure">#&lt;/a>
 
&lt;/h2>
&lt;ul>
&lt;li>
&lt;p>performing a specific operation (like addition or multiplication) on members of a set always produces a result that belongs to the same set&lt;/p>
&lt;/li>
&lt;li>
&lt;p>idea of closure is fundamental to defining a &lt;strong>&lt;a href="https://arshadhs.github.io/docs/ai/linear-algebra/01-linear-systems">Vector space&lt;/a>&lt;/strong> because it ensures that performing arithmetic operations (addition and scalar multiplication) on vectors within a set does not produce a new element outside that set.&lt;/p></description></item><item><title>Systems of Linear Equations</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/010-systems-of-linear-equations/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/010-systems-of-linear-equations/</guid><description>&lt;h1 id="systems-of-linear-equations">
 Systems of Linear Equations
 
 &lt;a class="anchor" href="#systems-of-linear-equations">#&lt;/a>
 
&lt;/h1>
&lt;p>A system of linear equations can be written compactly as:&lt;/p>
&lt;blockquote class="book-hint danger">
&lt;link rel="stylesheet" href="https://arshadhs.github.io/katex/katex.min.css" />
&lt;script defer src="https://arshadhs.github.io/katex/katex.min.js">&lt;/script>
 &lt;script defer src="https://arshadhs.github.io/katex/auto-render.min.js" onload="renderMathInElement(document.body, {
 &amp;#34;delimiters&amp;#34;: [
 {&amp;#34;left&amp;#34;: &amp;#34;$$&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;$$&amp;#34;, &amp;#34;display&amp;#34;: true},
 {&amp;#34;left&amp;#34;: &amp;#34;$&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;$&amp;#34;, &amp;#34;display&amp;#34;: false},
 {&amp;#34;left&amp;#34;: &amp;#34;\\(&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;\\)&amp;#34;, &amp;#34;display&amp;#34;: false},
 {&amp;#34;left&amp;#34;: &amp;#34;\\[&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;\\]&amp;#34;, &amp;#34;display&amp;#34;: true}
 ]
});">&lt;/script>
&lt;span>
 \[ 
A\mathbf{x}=\mathbf{b}
 \]
 &lt;/span>
&lt;/blockquote>
&lt;p>This represents:&lt;/p>
&lt;ul>
&lt;li>a &lt;strong>linear transformation&lt;/strong> applied to an unknown vector \(\mathbf{x}\)&lt;/li>
&lt;li>producing an output vector \(\mathbf{b}\)&lt;/li>
&lt;/ul>
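For a concrete system, A x = b can be solved in one call; a minimal sketch with illustrative coefficients:

```python
import numpy as np

# 2x + y  = 5
#  x + 3y = 10    written as  A x = b
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

x = np.linalg.solve(A, b)
print(x)  # x = 1, y = 3
```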
&lt;hr>
&lt;h2 id="key-components">
 Key components
 
 &lt;a class="anchor" href="#key-components">#&lt;/a>
 
&lt;/h2>
&lt;h3 id="coefficient-matrix-a">
 Coefficient matrix (A)
 
 &lt;a class="anchor" href="#coefficient-matrix-a">#&lt;/a>
 
&lt;/h3>
&lt;p>(A) contains the coefficients of the variables.&lt;/p></description></item><item><title>Matrices</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/020-matrices/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/020-matrices/</guid><description>&lt;h1 id="matrices">
 Matrices
 
 &lt;a class="anchor" href="#matrices">#&lt;/a>
 
&lt;/h1>
&lt;p>Matrices are the &lt;strong>core data structure of linear algebra&lt;/strong> and the &lt;strong>workhorse of machine learning&lt;/strong>.&lt;br>
Almost every ML model can be described as a sequence of matrix operations.&lt;/p>
&lt;ul>
&lt;li>&lt;a href="https://arshadhs.github.io/docs/ai/maths/linear-algebra/03-matrix-decomposition/special-matrices/">Special Matrices&lt;/a>&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h2 id="matrix">
 Matrix
 
 &lt;a class="anchor" href="#matrix">#&lt;/a>
 
&lt;/h2>
&lt;p>A &lt;strong>matrix&lt;/strong> is a rectangular array of numbers arranged in &lt;strong>rows and columns&lt;/strong>.&lt;/p>
&lt;blockquote class="book-hint danger">
&lt;link rel="stylesheet" href="https://arshadhs.github.io/katex/katex.min.css" />
&lt;script defer src="https://arshadhs.github.io/katex/katex.min.js">&lt;/script>
 &lt;script defer src="https://arshadhs.github.io/katex/auto-render.min.js" onload="renderMathInElement(document.body, {
 &amp;#34;delimiters&amp;#34;: [
 {&amp;#34;left&amp;#34;: &amp;#34;$$&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;$$&amp;#34;, &amp;#34;display&amp;#34;: true},
 {&amp;#34;left&amp;#34;: &amp;#34;$&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;$&amp;#34;, &amp;#34;display&amp;#34;: false},
 {&amp;#34;left&amp;#34;: &amp;#34;\\(&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;\\)&amp;#34;, &amp;#34;display&amp;#34;: false},
 {&amp;#34;left&amp;#34;: &amp;#34;\\[&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;\\]&amp;#34;, &amp;#34;display&amp;#34;: true}
 ]
});">&lt;/script>
&lt;span>
 \[ 
A \in \mathbb{R}^{m \times n}
 \]
 &lt;/span>
&lt;/blockquote>
&lt;p>An ( m \times n ) matrix has:&lt;/p></description></item><item><title>Solving Linear Systems</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/030-solving-linear-systems/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/030-solving-linear-systems/</guid><description>&lt;h1 id="solving-linear-systems">
 Solving Linear Systems
 
 &lt;a class="anchor" href="#solving-linear-systems">#&lt;/a>
 
&lt;/h1>
&lt;p>Solve using:&lt;/p>
&lt;ul>
&lt;li>Substitution Method&lt;/li>
&lt;li>Elimination Method (multiply &amp;amp; then subtract)&lt;/li>
&lt;li>Cross Multiplication&lt;/li>
&lt;/ul>
&lt;p>A linear system can have:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>no solution&lt;/strong>&lt;/li>
&lt;li>&lt;strong>a unique solution&lt;/strong>&lt;/li>
&lt;li>&lt;strong>infinitely many solutions&lt;/strong>&lt;/li>
&lt;/ul>
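Which of the three cases holds can be read off from ranks (the Rouché–Capelli criterion): compare rank(A) with the rank of the augmented matrix [A | b], and both with the number of unknowns. A sketch (the helper name is ours):

```python
import numpy as np

def solution_type(A, b):
    """Classify A x = b as 'unique', 'infinite', or 'none'."""
    n_unknowns = A.shape[1]
    rank_A = np.linalg.matrix_rank(A)
    rank_aug = np.linalg.matrix_rank(np.column_stack([A, b]))
    if rank_A < rank_aug:
        return "none"        # b is not in the column space of A
    if rank_A < n_unknowns:
        return "infinite"    # free variables remain
    return "unique"

A = np.array([[1.0, 2.0], [2.0, 4.0]])       # second row is a multiple of the first
print(solution_type(A, np.array([3.0, 6.0])))  # infinite (consistent, rank-deficient)
print(solution_type(A, np.array([3.0, 7.0])))  # none (inconsistent)
```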
&lt;h2 id="positive-definite-matrices">
 Positive Definite Matrices
 
 &lt;a class="anchor" href="#positive-definite-matrices">#&lt;/a>
 
&lt;/h2>
&lt;p>A square matrix is positive definite if pre-multiplying and post-multiplying it by the same nonzero vector always gives a positive number as a result, independently of which nonzero vector we choose.&lt;/p>
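That condition, x^T A x > 0 for every nonzero x, can be probed numerically; for a symmetric matrix it is equivalent to all eigenvalues being positive. A sketch (the matrix is illustrative):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # symmetric

# Quadratic form x^T A x for a few nonzero vectors
for x in [np.array([1.0, 0.0]), np.array([1.0, -1.0]), np.array([-3.0, 2.0])]:
    print(x @ A @ x > 0)  # True each time

# Equivalent check for symmetric matrices: all eigenvalues positive
print(np.all(np.linalg.eigvalsh(A) > 0))  # True (eigenvalues 1 and 3)
```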
&lt;p>Positive definite symmetric matrices have the property that all their eigenvalues are positive.&lt;/p></description></item><item><title>Forward and Backward Substitution</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/forward-backward/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/forward-backward/</guid><description>&lt;h1 id="forward-and-backward-substitution">
 Forward and Backward Substitution
 
 &lt;a class="anchor" href="#forward-and-backward-substitution">#&lt;/a>
 
&lt;/h1>
&lt;p>Forward and backward substitution are efficient algorithms used to solve linear systems when the coefficient matrix is &lt;strong>triangular&lt;/strong>.&lt;/p>
&lt;p>They are typically used after:&lt;/p>
&lt;ul>
&lt;li>Gaussian elimination&lt;/li>
&lt;li>LU decomposition&lt;/li>
&lt;/ul>
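Forward substitution itself is a short loop: solve for each unknown in turn, reusing the values already found. A minimal sketch for a lower triangular system L x = b (the function name is ours):

```python
import numpy as np

def forward_substitution(L, b):
    """Solve L x = b, where L is lower triangular with nonzero diagonal."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        # x[i] depends only on x[0..i-1], which are already computed
        x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

L = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [1.0, 1.0, 1.0]])
b = np.array([4.0, 5.0, 6.0])
x = forward_substitution(L, b)
print(np.allclose(L @ x, b))  # True
```

Backward substitution is the mirror image: start from the last row of an upper triangular matrix and work upwards.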
&lt;hr>
&lt;h1 id="1-forward-substitution-lower-triangular-systems">
 1. Forward Substitution (Lower Triangular Systems)
 
 &lt;a class="anchor" href="#1-forward-substitution-lower-triangular-systems">#&lt;/a>
 
&lt;/h1>
&lt;p>Used to solve:&lt;/p>
&lt;blockquote class="book-hint danger">
&lt;link rel="stylesheet" href="https://arshadhs.github.io/katex/katex.min.css" />
&lt;script defer src="https://arshadhs.github.io/katex/katex.min.js">&lt;/script>
 &lt;script defer src="https://arshadhs.github.io/katex/auto-render.min.js" onload="renderMathInElement(document.body, {
 &amp;#34;delimiters&amp;#34;: [
 {&amp;#34;left&amp;#34;: &amp;#34;$$&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;$$&amp;#34;, &amp;#34;display&amp;#34;: true},
 {&amp;#34;left&amp;#34;: &amp;#34;$&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;$&amp;#34;, &amp;#34;display&amp;#34;: false},
 {&amp;#34;left&amp;#34;: &amp;#34;\\(&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;\\)&amp;#34;, &amp;#34;display&amp;#34;: false},
 {&amp;#34;left&amp;#34;: &amp;#34;\\[&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;\\]&amp;#34;, &amp;#34;display&amp;#34;: true}
 ]
});">&lt;/script>
&lt;span>
 \[ 
L\mathbf{x} = \mathbf{b}
 \]
 &lt;/span>
&lt;/blockquote>
&lt;p>where (L) is a &lt;strong>lower triangular matrix&lt;/strong>:&lt;/p></description></item><item><title>Inverse Matrix</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/inverse-matrix/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/inverse-matrix/</guid><description>&lt;h1 id="inverse-matrix">
 Inverse Matrix
 
 &lt;a class="anchor" href="#inverse-matrix">#&lt;/a>
 
&lt;/h1>
&lt;p>The &lt;strong>inverse of a matrix&lt;/strong> is a matrix that, when multiplied with the original matrix, produces the &lt;strong>identity matrix&lt;/strong>.&lt;/p>
&lt;p>A square matrix \(A\) is &lt;strong>invertible&lt;/strong> if there exists a matrix \(A^{-1}\) such that:&lt;/p>
&lt;blockquote class="book-hint danger">
&lt;link rel="stylesheet" href="https://arshadhs.github.io/katex/katex.min.css" />
&lt;script defer src="https://arshadhs.github.io/katex/katex.min.js">&lt;/script>
 &lt;script defer src="https://arshadhs.github.io/katex/auto-render.min.js" onload="renderMathInElement(document.body, {
 &amp;#34;delimiters&amp;#34;: [
 {&amp;#34;left&amp;#34;: &amp;#34;$$&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;$$&amp;#34;, &amp;#34;display&amp;#34;: true},
 {&amp;#34;left&amp;#34;: &amp;#34;$&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;$&amp;#34;, &amp;#34;display&amp;#34;: false},
 {&amp;#34;left&amp;#34;: &amp;#34;\\(&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;\\)&amp;#34;, &amp;#34;display&amp;#34;: false},
 {&amp;#34;left&amp;#34;: &amp;#34;\\[&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;\\]&amp;#34;, &amp;#34;display&amp;#34;: true}
 ]
});">&lt;/script>
&lt;span>
 \[ 
AA^{-1} = A^{-1}A = I
 \]
 &lt;/span>
&lt;/blockquote>
&lt;p>Here:&lt;/p></description></item><item><title>Convex Combination</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/convex/</link><pubDate>Thu, 29 Jan 2026 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/convex/</guid><description>&lt;h1 id="convex-combination-of-two-points">
 Convex Combination of Two Points
 
 &lt;a class="anchor" href="#convex-combination-of-two-points">#&lt;/a>
 
&lt;/h1>
&lt;p>A &lt;strong>convex combination&lt;/strong> describes how to form a point between two points using weighted averages.&lt;/p>
&lt;p>It is a fundamental building block in several advanced fields:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Linear Algebra &amp;amp; Geometry&lt;/strong>&lt;/li>
&lt;li>&lt;strong>Optimization Theory&lt;/strong>&lt;/li>
&lt;li>&lt;strong>Machine Learning&lt;/strong> (specifically SVMs, clustering, and data interpolation)&lt;/li>
&lt;/ul>

&lt;hr>
&lt;p>Given two points (or vectors) $\mathbf{x}_1, \mathbf{x}_2 \in \mathbb{R}^n$, a convex combination of these points is defined as:&lt;/p>
$$\mathbf{x} = \lambda \mathbf{x}_1 + (1 - \lambda)\mathbf{x}_2$$&lt;p>&lt;strong>Where:&lt;/strong>&lt;/p></description></item><item><title>Vector Spaces</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/</guid><description>&lt;h1 id="vector-spaces">
 Vector Spaces
 
 &lt;a class="anchor" href="#vector-spaces">#&lt;/a>
 
&lt;/h1>
&lt;p>A vector space is the mathematical “home” where vectors live and where addition and scaling are valid operations.&lt;/p>
&lt;ul>
&lt;li>
&lt;p>A vector space is a set closed under vector addition and scalar multiplication.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Machine learning operates in vector spaces.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>This section covers independence, bases, rank, and geometric tools such as norms and inner products, which measure length, distance, and angles.&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>A &lt;strong>vector space&lt;/strong> is a set of vectors that follows &lt;strong>ten axioms&lt;/strong>, defined under two operations:&lt;/p></description></item><item><title>Feature Space</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/feature-space/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/feature-space/</guid><description>&lt;h1 id="feature">
 Feature
 
 &lt;a class="anchor" href="#feature">#&lt;/a>
 
&lt;/h1>
&lt;p>A &lt;strong>feature&lt;/strong> is an individual measurable property or characteristic of a data point used as input to a machine learning model.&lt;/p>
&lt;p>Each feature corresponds to &lt;strong>one dimension&lt;/strong>.&lt;/p>
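&lt;p>As a small illustration (a NumPy sketch with hypothetical feature names and values, not from the notes), a data point with three features becomes a vector with one entry per feature:&lt;/p>

```python
import numpy as np

# A data point with d = 3 features (e.g. height, weight, age) --
# the names and values here are purely illustrative
x = np.array([170.0, 65.0, 30.0])

print(x.shape)  # (3,) -- one dimension per feature
```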


&lt;span>
 \[ 
x_i \in \mathbb{R}
 \]
 &lt;/span>


&lt;p>A data point with ( d ) features is represented as:&lt;/p></description></item><item><title>Cauchy–Schwarz</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/cauchyschwarz/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/cauchyschwarz/</guid><description>&lt;h1 id="cauchyschwarz-inequality">
 Cauchy–Schwarz Inequality
 
 &lt;a class="anchor" href="#cauchyschwarz-inequality">#&lt;/a>
 
&lt;/h1>
&lt;p>The &lt;strong>Cauchy–Schwarz Inequality&lt;/strong> is one of the most important results in linear algebra.&lt;/p>
&lt;p>It places a fundamental bound on the inner product of two vectors.&lt;/p>
&lt;blockquote class="book-hint info">
&lt;p>If you see &lt;strong>angle&lt;/strong>, &lt;strong>cosine&lt;/strong>, &lt;strong>similarity&lt;/strong>, or &lt;strong>inner product bounds&lt;/strong>&lt;br>
→ think &lt;strong>Cauchy–Schwarz Inequality&lt;/strong>&lt;/p>
&lt;p>Key Idea:
The inner product (dot product) can never exceed the product of magnitudes.
This ensures all geometric interpretations (angles, cosine) are valid.&lt;/p>
&lt;/blockquote>
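&lt;p>A quick numerical illustration (a NumPy sketch, not part of the original notes) of the bound for two concrete vectors:&lt;/p>

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, -1.0, 2.0])

lhs = abs(np.dot(x, y))                       # |inner product|
rhs = np.linalg.norm(x) * np.linalg.norm(y)   # product of magnitudes

# The inner product never exceeds the product of magnitudes,
# so the cosine lhs/rhs always lies in [-1, 1]
ok = bool(np.less_equal(lhs, rhs))
print(ok)  # True
```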
&lt;hr>
&lt;h2 id="statement-of-the-inequality">
 Statement of the Inequality
 
 &lt;a class="anchor" href="#statement-of-the-inequality">#&lt;/a>
 
&lt;/h2>
&lt;p>For any vectors:&lt;/p></description></item><item><title>Matrix Decompositions</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/</link><pubDate>Wed, 18 Mar 2026 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/</guid><description>&lt;h1 id="matrix-decompositions">
 Matrix Decompositions
 
 &lt;a class="anchor" href="#matrix-decompositions">#&lt;/a>
 
&lt;/h1>
&lt;p>Decompositions reveal structure in matrices and power algorithms like PCA.&lt;/p>
&lt;p>Matrix decompositions break complex matrices into simpler parts.&lt;/p>
&lt;p>From the lecture introduction, matrices are used to describe mappings and transformations of vectors.&lt;/p>
&lt;p>That is why decomposition is important:
it lets us understand a complicated transformation by rewriting it using simpler building blocks.&lt;/p>
&lt;p>In the slides, the topic is introduced as part of three closely connected goals:
how to summarise matrices,
how matrices can be decomposed,
and how the decompositions can be used for matrix approximations.&lt;/p></description></item><item><title>Characteristic Polynomial</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/characteristic-polynomial/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/characteristic-polynomial/</guid><description>&lt;h1 id="characteristic-polynomial">
 Characteristic Polynomial
 
 &lt;a class="anchor" href="#characteristic-polynomial">#&lt;/a>
 
&lt;/h1>
&lt;p>The &lt;strong>characteristic polynomial&lt;/strong> of a square matrix is the key tool used to compute &lt;strong>eigenvalues&lt;/strong>.&lt;/p>
&lt;p>It connects:&lt;/p>
&lt;ul>
&lt;li>Determinants&lt;/li>
&lt;li>Trace&lt;/li>
&lt;li>Eigenvalues&lt;/li>
&lt;li>Matrix structure&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h2 id="definition">
 Definition
 
 &lt;a class="anchor" href="#definition">#&lt;/a>
 
&lt;/h2>
&lt;p>Let&lt;br>

&lt;span>
 \( A \in \mathbb{R}^{n \times n} \)
 &lt;/span>

&lt;br>
and 
&lt;span>
 \( \lambda \in \mathbb{R} \)
 &lt;/span>

.&lt;/p>
&lt;p>The &lt;strong>characteristic polynomial&lt;/strong> of (A) is defined as:&lt;/p>
&lt;blockquote class="book-hint danger">
&lt;span>
 \[ 
p_A(\lambda) = \det(A - \lambda I)
 \]
 &lt;/span>
&lt;/blockquote>
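&lt;p>As an illustrative sketch (NumPy, not from the notes): the roots of the characteristic polynomial are exactly the eigenvalues. Note that &lt;code>np.poly&lt;/code> uses the monic convention, which for even-sized matrices agrees with the definition above.&lt;/p>

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Monic characteristic polynomial coefficients of A:
# lambda^2 - 4*lambda + 3  ->  [1, -4, 3]
coeffs = np.poly(A)

roots = np.sort(np.roots(coeffs))
eigs = np.sort(np.linalg.eigvals(A))
print(np.allclose(roots, eigs))  # True -- roots are the eigenvalues
```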
&lt;p>It is a polynomial in 
&lt;span>
 \( \lambda \)
 &lt;/span>

 of degree (n).&lt;/p></description></item><item><title>Determinant and Trace</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/010-determinant-and-trace/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/010-determinant-and-trace/</guid><description>&lt;h1 id="determinant-and-trace">
 Determinant and Trace
 
 &lt;a class="anchor" href="#determinant-and-trace">#&lt;/a>
 
&lt;/h1>
&lt;hr>
&lt;h2 id="minor">
 Minor
 
 &lt;a class="anchor" href="#minor">#&lt;/a>
 
&lt;/h2>
&lt;p>The &lt;strong>minor&lt;/strong> of an element 
&lt;span>
 \( a_{ij} \)
 &lt;/span>

 is the determinant of the smaller square matrix formed by:&lt;/p>
&lt;ul>
&lt;li>removing &lt;strong>row&lt;/strong> 
&lt;span>
 \( i \)
 &lt;/span>

&lt;/li>
&lt;li>removing &lt;strong>column&lt;/strong> 
&lt;span>
 \( j \)
 &lt;/span>

&lt;/li>
&lt;/ul>
&lt;p>The minor is denoted 
&lt;span>
 \( M_{ij} \)
 &lt;/span>

.&lt;/p>
&lt;blockquote class="book-hint info">
&lt;p>Minors are used to compute &lt;strong>cofactors&lt;/strong>, which are used for determinants and inverses (via adjoint/adjugate).&lt;/p>
&lt;/blockquote>
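&lt;p>The row/column deletion can be sketched directly (an illustrative NumPy helper, not part of the original notes):&lt;/p>

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])

def minor(A, i, j):
    """Determinant of A with row i and column j removed (0-indexed)."""
    sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return np.linalg.det(sub)

# Minor of a_00: det([[5, 6], [8, 10]]) = 50 - 48 = 2
print(round(minor(A, 0, 0)))  # 2
```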
&lt;hr>
&lt;h2 id="cofactor">
 Cofactor
 
 &lt;a class="anchor" href="#cofactor">#&lt;/a>
 
&lt;/h2>
&lt;p>The &lt;strong>cofactor&lt;/strong> of 
&lt;span>
 \( a_{ij} \)
 &lt;/span>

, denoted 
&lt;span>
 \( C_{ij} \)
 &lt;/span>

, is:&lt;/p></description></item><item><title>Eigenvalues and Eigenvectors</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/020-eigenvalues-and-eigenvectors/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/020-eigenvalues-and-eigenvectors/</guid><description>&lt;h1 id="eigenvalues-and-eigenvectors">
 Eigenvalues and Eigenvectors
 
 &lt;a class="anchor" href="#eigenvalues-and-eigenvectors">#&lt;/a>
 
&lt;/h1>
&lt;ul>
&lt;li>Eigenvalues give scaling.&lt;/li>
&lt;li>Eigenvectors define invariant directions of transformation.&lt;/li>
&lt;/ul>
&lt;p>Eigenvalues and eigenvectors describe directions that remain unchanged under a linear transformation, except for scaling.&lt;/p>
&lt;p>From lectures:
matrix multiplication represents a transformation of space.&lt;br>
Most vectors change direction and magnitude.&lt;br>
Some special vectors only scale.&lt;br>
These are eigenvectors.&lt;/p>
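&lt;p>This scaling behaviour can be checked numerically (an illustrative NumPy sketch, not from the lecture):&lt;/p>

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(A)

# Each eigenvector is only scaled by A: A v = lambda v
for lam, v in zip(eigvals, eigvecs.T):
    print(np.allclose(A @ v, lam * v))  # True for every eigenpair
```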
&lt;blockquote class="book-hint info">
&lt;p>Key Idea:
A matrix transformation stretches or compresses vectors.
Eigenvectors are directions that remain unchanged.
Eigenvalues tell how much scaling happens.&lt;/p></description></item><item><title>Cholesky Decomposition</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/030-cholesky-decomposition/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/030-cholesky-decomposition/</guid><description>&lt;h1 id="cholesky-decomposition">
 Cholesky Decomposition
 
 &lt;a class="anchor" href="#cholesky-decomposition">#&lt;/a>
 
&lt;/h1>
&lt;p>Cholesky decomposition is a special matrix factorisation used for symmetric positive definite matrices.&lt;/p>
&lt;p>From lecture discussions, this decomposition is powerful because it reduces a matrix into a triangular form, making computations easier and more stable.&lt;/p>
&lt;blockquote class="book-hint info">
&lt;p>Key Idea:
Cholesky decomposition expresses a matrix as a product of a lower triangular matrix and its transpose.
It is efficient and numerically stable.&lt;/p>
&lt;/blockquote>
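&lt;p>A minimal NumPy sketch (illustrative, not from the notes): &lt;code>np.linalg.cholesky&lt;/code> returns the lower triangular factor, and multiplying it by its transpose recovers the original matrix:&lt;/p>

```python
import numpy as np

# A symmetric positive definite matrix (leading minors 4 and 8 are positive)
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

L = np.linalg.cholesky(A)  # lower triangular factor

print(np.allclose(L @ L.T, A))         # True: A = L L^T
print(np.allclose(L, np.tril(L)))      # True: L is lower triangular
```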
&lt;hr>
&lt;h2 id="definition">
 Definition
 
 &lt;a class="anchor" href="#definition">#&lt;/a>
 
&lt;/h2>
&lt;p>For a symmetric positive definite matrix:&lt;/p></description></item><item><title>Eigen Decomposition</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/040-eigen-decomposition/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/040-eigen-decomposition/</guid><description>&lt;h1 id="eigen-decomposition">
 Eigen Decomposition
 
 &lt;a class="anchor" href="#eigen-decomposition">#&lt;/a>
 
&lt;/h1>
&lt;p>Eigen decomposition expresses a matrix using its eigenvectors and eigenvalues.&lt;/p>
&lt;p>From lecture discussions, this is one of the most important ways to understand the internal structure of a matrix.&lt;/p>
&lt;p>Instead of treating the matrix as a black box, eigen decomposition reveals its fundamental directions and scaling behaviour.&lt;/p>
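&lt;p>As a sketch of this idea (illustrative NumPy, not part of the notes): a matrix can be rebuilt from its eigenvectors and eigenvalues alone, showing they carry its full structure:&lt;/p>

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

eigvals, Q = np.linalg.eig(A)  # directions (columns of Q) and scalings

# Rebuild A from directions and scaling factors: A = Q diag(eigvals) Q^-1
A_rebuilt = Q @ np.diag(eigvals) @ np.linalg.inv(Q)
print(np.allclose(A_rebuilt, A))  # True
```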
&lt;blockquote class="book-hint info">
&lt;p>Key Idea:
Eigen decomposition rewrites a matrix in terms of directions (eigenvectors) and scaling factors (eigenvalues).
This makes complex transformations easier to understand and compute.&lt;/p></description></item><item><title>Diagonalization</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/diagonalization/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/diagonalization/</guid><description>&lt;h1 id="diagonalization">
 Diagonalization
 
 &lt;a class="anchor" href="#diagonalization">#&lt;/a>
 
&lt;/h1>
&lt;p>Diagonalisation expresses a matrix using its eigenvectors and eigenvalues when possible.&lt;/p>
&lt;p>From lecture explanation, diagonalisation is one of the most powerful tools because it converts a complicated matrix into a much simpler form.&lt;/p>
&lt;p>Instead of working with a full matrix, we work with a diagonal matrix, which is much easier to analyse and compute.&lt;/p>
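&lt;p>The change of basis can be sketched numerically (an illustrative NumPy example, not from the lecture): expressing the matrix in its eigenvector basis leaves a diagonal matrix.&lt;/p>

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])  # distinct eigenvalues 5 and 2, so diagonalisable

eigvals, P = np.linalg.eig(A)
D = np.linalg.inv(P) @ A @ P  # change of basis into the eigenvector basis

# Off-diagonal entries vanish (up to rounding): D is diagonal
print(np.allclose(D, np.diag(np.diag(D))))  # True
```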
&lt;blockquote class="book-hint info">
&lt;p>Key Idea:
If a matrix has enough independent eigenvectors, it can be rewritten as a diagonal matrix using a change of basis.
This simplifies matrix operations significantly.&lt;/p></description></item><item><title>Singular Value Decomposition (SVD)</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/050-singular-value-decomposition/</link><pubDate>Wed, 18 Mar 2026 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/050-singular-value-decomposition/</guid><description>&lt;h1 id="singular-value-decomposition-svd">
 Singular Value Decomposition (SVD)
 
 &lt;a class="anchor" href="#singular-value-decomposition-svd">#&lt;/a>
 
&lt;/h1>
&lt;p>Singular Value Decomposition (SVD) is one of the most important matrix decomposition techniques in linear algebra and machine learning.&lt;/p>
&lt;p>It factorises any matrix into three simpler matrices that reveal its structure.&lt;/p>
&lt;blockquote class="book-hint info">
&lt;p>Key Idea:
SVD decomposes a matrix into rotations + scaling.
It tells us how data is transformed along orthogonal directions.&lt;/p>
&lt;/blockquote>
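&lt;p>A minimal NumPy sketch (illustrative, not from the notes): any matrix, even a rectangular one, factorises into orthogonal matrices and a diagonal of singular values.&lt;/p>

```python
import numpy as np

A = np.array([[3.0, 1.0, 1.0],
              [-1.0, 3.0, 1.0]])  # a 2x3 (rectangular) matrix

U, S, Vt = np.linalg.svd(A, full_matrices=False)

# U and Vt are orthogonal (rotations/reflections); S holds the scalings,
# sorted in decreasing order
print(np.allclose(U @ np.diag(S) @ Vt, A))  # True
```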
&lt;hr>
&lt;h1 id="definition">
 Definition
 
 &lt;a class="anchor" href="#definition">#&lt;/a>
 
&lt;/h1>
&lt;p>For any matrix in real space:

&lt;span style="color: green;">
 &lt;span>
 \[ 
A \in \mathbb{R}^{m \times n}
 \]
 &lt;/span>

&lt;/span>&lt;/p></description></item><item><title>Matrix Approximation</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/060-matrix-approximation/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/060-matrix-approximation/</guid><description>&lt;h1 id="matrix-approximation">
 Matrix Approximation
 
 &lt;a class="anchor" href="#matrix-approximation">#&lt;/a>
 
&lt;/h1>
&lt;p>Low-rank approximation keeps the most important structure while reducing noise and computation.&lt;/p>
&lt;hr>
&lt;h2 id="low-rank-approximation">
 Low-Rank Approximation
 
 &lt;a class="anchor" href="#low-rank-approximation">#&lt;/a>
 
&lt;/h2>
&lt;p>Used for:&lt;/p>
&lt;ul>
&lt;li>Dimensionality reduction&lt;/li>
&lt;li>Noise removal&lt;/li>
&lt;li>Efficient computation&lt;/li>
&lt;/ul>
&lt;p>Forms the basis of &lt;strong>PCA&lt;/strong>.&lt;/p>
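&lt;p>The idea can be sketched with a truncated SVD (an illustrative NumPy example on synthetic data, not from the notes): keeping the top &lt;em>k&lt;/em> singular values gives the best rank-&lt;em>k&lt;/em> approximation in the Frobenius norm (Eckart–Young theorem).&lt;/p>

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))  # synthetic data matrix

U, S, Vt = np.linalg.svd(A, full_matrices=False)

# Keep only the top k singular values/vectors
k = 2
A_k = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]

print(np.linalg.matrix_rank(A_k))  # 2 -- a rank-k approximation of A
```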
&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/">
 Matrix Decompositions
&lt;/a>&lt;/p></description></item><item><title>Continuous Optimisation</title><link>https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/</guid><description>&lt;h1 id="continuous-optimisation">
 Continuous Optimisation
 
 &lt;a class="anchor" href="#continuous-optimisation">#&lt;/a>
 
&lt;/h1>
&lt;p>Optimisation finds parameters that minimise (or maximise) an objective function.&lt;/p>
&lt;hr>




&lt;ul>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/gradient-descent/">Optimisation using Gradient Descent&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/constrained-optimisation/">Constrained Optimisation&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/lagrange-multipliers/">Lagrange Multipliers&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/convex-optimisation/">Convex Optimisation&lt;/a>
 &lt;/li>
 
 

 
 
&lt;/ul>


&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/">
 Calculus
&lt;/a>&lt;/p></description></item><item><title>Optimisation using Gradient Descent</title><link>https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/gradient-descent/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/gradient-descent/</guid><description>&lt;h1 id="optimisation-using-gradient-descent">
 Optimisation using Gradient Descent
 
 &lt;a class="anchor" href="#optimisation-using-gradient-descent">#&lt;/a>
 
&lt;/h1>
&lt;p>Gradient descent is an optimisation algorithm used to train machine learning models and neural networks.&lt;/p>
&lt;ul>
&lt;li>Gradient descent updates parameters by moving opposite the gradient.&lt;/li>
&lt;/ul>
&lt;p>It trains ML models by minimising the error:&lt;/p>
&lt;ul>
&lt;li>between predicted and actual results&lt;/li>
&lt;li>by iteratively adjusting model parameters&lt;/li>
&lt;li>moving step by step in the direction of the steepest decrease in the loss function, which lets the model learn the weights that give the best predictions&lt;/li>
&lt;/ul>
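&lt;p>The update rule can be sketched on a simple one-dimensional loss (an illustrative example, not from the notes):&lt;/p>

```python
import numpy as np

# Minimise f(x) = (x - 3)^2, whose gradient is 2*(x - 3)
def grad(x):
    return 2.0 * (x - 3.0)

x = 0.0    # initial guess
lr = 0.1   # learning rate (step size)
for _ in range(100):
    x = x - lr * grad(x)  # move opposite the gradient

print(round(x, 4))  # 3.0 -- converges to the minimiser
```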
&lt;hr>
&lt;h2 id="types-of-gradient-gescent-learning-algorithms">
 Types of Gradient Descent learning algorithms
 
 &lt;a class="anchor" href="#types-of-gradient-gescent-learning-algorithms">#&lt;/a>
 
&lt;/h2>
&lt;ol>
&lt;li>Batch gradient descent&lt;/li>
&lt;li>Stochastic gradient descent&lt;/li>
&lt;li>Mini-batch gradient descent&lt;/li>
&lt;/ol>
&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/">
 Continuous Optimisation
&lt;/a>&lt;/p></description></item><item><title>Constrained Optimisation</title><link>https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/constrained-optimisation/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/constrained-optimisation/</guid><description>&lt;h1 id="constrained-optimisation">
 Constrained Optimisation
 
 &lt;a class="anchor" href="#constrained-optimisation">#&lt;/a>
 
&lt;/h1>
&lt;p>Optimisation with constraints (equalities/inequalities).&lt;/p>
&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/">
 Continuous Optimisation
&lt;/a>&lt;/p></description></item><item><title>Lagrange Multipliers</title><link>https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/lagrange-multipliers/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/lagrange-multipliers/</guid><description>&lt;h1 id="lagrange-multipliers">
 Lagrange Multipliers
 
 &lt;a class="anchor" href="#lagrange-multipliers">#&lt;/a>
 
&lt;/h1>
&lt;p>Transforms constrained problems into unconstrained ones using Lagrangians.&lt;/p>
&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/">
 Continuous Optimisation
&lt;/a>&lt;/p></description></item><item><title>Convex Optimisation</title><link>https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/convex-optimisation/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/convex-optimisation/</guid><description>&lt;h1 id="convex-optimisation">
 Convex Optimisation
 
 &lt;a class="anchor" href="#convex-optimisation">#&lt;/a>
 
&lt;/h1>
&lt;p>Convex objectives have a single global minimum, making optimisation reliable.&lt;/p>
&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/">
 Continuous Optimisation
&lt;/a>&lt;/p></description></item><item><title>Nonlinear Optimisation</title><link>https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/</guid><description>&lt;h1 id="nonlinear-optimisation-in-machine-learning">
 Nonlinear Optimisation in Machine Learning
 
 &lt;a class="anchor" href="#nonlinear-optimisation-in-machine-learning">#&lt;/a>
 
&lt;/h1>
&lt;p>Practical training challenges and modern optimisers used in ML.&lt;/p>
&lt;hr>




&lt;ul>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/optimisation-challenges/">Challenges in Gradient-Based Optimisation&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/stochastic-gradient-descent/">Stochastic Gradient Descent (SGD)&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/momentum-methods/">Momentum-Based Learning&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/adaptive-methods/">Adaptive Methods: AdaGrad, RMSProp, Adam&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/hyperparameter-tuning/">Tuning Hyperparameters and Preprocessing&lt;/a>
 &lt;/li>
 
 

 
 
&lt;/ul>


&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/">
 Calculus
&lt;/a>&lt;/p></description></item><item><title>Challenges in Gradient-Based Optimisation</title><link>https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/optimisation-challenges/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/optimisation-challenges/</guid><description>&lt;h1 id="challenges-in-gradient-based-optimisation">
 Challenges in Gradient-Based Optimisation
 
 &lt;a class="anchor" href="#challenges-in-gradient-based-optimisation">#&lt;/a>
 
&lt;/h1>
&lt;ul>
&lt;li>Local optima and flat regions&lt;/li>
&lt;li>Differential curvature&lt;/li>
&lt;li>Difficult topologies (cliffs and valleys)&lt;/li>
&lt;/ul>
&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/">
 Nonlinear Optimisation
&lt;/a>&lt;/p></description></item><item><title>Stochastic Gradient Descent (SGD)</title><link>https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/stochastic-gradient-descent/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/stochastic-gradient-descent/</guid><description>&lt;h1 id="stochastic-gradient-descent-sgd">
 Stochastic Gradient Descent (SGD)
 
 &lt;a class="anchor" href="#stochastic-gradient-descent-sgd">#&lt;/a>
 
&lt;/h1>
&lt;p>SGD uses mini-batches to trade exact gradients for speed and generalisation.&lt;/p>
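&lt;p>A minimal sketch of the mini-batch idea (synthetic data and parameter values are illustrative assumptions, not from the notes): each step uses the gradient of a random batch rather than the full dataset.&lt;/p>

```python
import numpy as np

# Fit y = w*x by least squares using mini-batch SGD
rng = np.random.default_rng(42)
x = rng.standard_normal(256)
y = 2.0 * x + 0.01 * rng.standard_normal(256)  # true slope: 2.0

w, lr, batch = 0.0, 0.1, 32
for epoch in range(20):
    idx = rng.permutation(len(x))           # reshuffle every epoch
    for start in range(0, len(x), batch):
        b = idx[start:start + batch]
        g = 2.0 * np.mean((w * x[b] - y[b]) * x[b])  # batch gradient
        w -= lr * g

print(round(w, 2))  # close to the true slope 2.0
```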
&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/">
 Nonlinear Optimisation
&lt;/a>&lt;/p></description></item><item><title>Momentum-Based Learning</title><link>https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/momentum-methods/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/momentum-methods/</guid><description>&lt;h1 id="momentum-based-learning">
 Momentum-Based Learning
 
 &lt;a class="anchor" href="#momentum-based-learning">#&lt;/a>
 
&lt;/h1>
&lt;p>Momentum smooths updates and helps traverse valleys efficiently.&lt;/p>
&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/">
 Nonlinear Optimisation
&lt;/a>&lt;/p></description></item><item><title>Adaptive Methods: AdaGrad, RMSProp, Adam</title><link>https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/adaptive-methods/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/adaptive-methods/</guid><description>&lt;h1 id="adaptive-methods-adagrad-rmsprop-adam">
 Adaptive Methods: AdaGrad, RMSProp, Adam
 
 &lt;a class="anchor" href="#adaptive-methods-adagrad-rmsprop-adam">#&lt;/a>
 
&lt;/h1>
&lt;p>Adaptive methods adjust learning rates per-parameter.&lt;/p>
&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/">
 Nonlinear Optimisation
&lt;/a>&lt;/p></description></item><item><title>Tuning Hyperparameters and Preprocessing</title><link>https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/hyperparameter-tuning/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/hyperparameter-tuning/</guid><description>&lt;h1 id="tuning-hyperparameters-and-preprocessing">
 Tuning Hyperparameters and Preprocessing
 
 &lt;a class="anchor" href="#tuning-hyperparameters-and-preprocessing">#&lt;/a>
 
&lt;/h1>
&lt;ul>
&lt;li>Learning rate schedules&lt;/li>
&lt;li>Initialisation&lt;/li>
&lt;li>Tuning hyperparameters&lt;/li>
&lt;li>Importance of feature preprocessing&lt;/li>
&lt;/ul>
&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/">
 Nonlinear Optimisation
&lt;/a>&lt;/p></description></item><item><title>Dimensionality reduction and PCA</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/</guid><description>&lt;h1 id="dimensionality-reduction-and-pca">
 Dimensionality reduction and PCA
 
 &lt;a class="anchor" href="#dimensionality-reduction-and-pca">#&lt;/a>
 
&lt;/h1>
&lt;p>PCA and SVM connect linear algebra, geometry, and optimisation.&lt;/p>
&lt;hr>




&lt;ul>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/pca/">Principal Component Analysis (PCA)&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/pca-theory/">PCA Theory&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/pca-practice/">PCA in Practice&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/latent-variable-view/">Latent Variable Perspective&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/svm-mathematical-foundations/">Mathematical Preliminaries of SVM&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/kernels/">Nonlinear SVM and Kernels&lt;/a>
 &lt;/li>
 
 

 
 
&lt;/ul>


&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/">
 Linear Algebra
&lt;/a>&lt;/p></description></item><item><title>Principal Component Analysis (PCA)</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/pca/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/pca/</guid><description>&lt;h1 id="principal-component-analysis-pca">
 Principal Component Analysis (PCA)
 
 &lt;a class="anchor" href="#principal-component-analysis-pca">#&lt;/a>
 
&lt;/h1>
&lt;ul>
&lt;li>dimensionality reduction technique&lt;/li>
&lt;li>helps us to &lt;strong>reduce the number of features&lt;/strong> in a dataset while keeping the most important information.&lt;/li>
&lt;li>simplifies complex datasets by transforming correlated features into a smaller set of uncorrelated components.&lt;/li>
&lt;li>uses &lt;strong>linear algebra&lt;/strong> to transform data into &lt;strong>new features&lt;/strong> called principal components.&lt;/li>
&lt;li>finds these by calculating &lt;strong>eigenvectors (directions)&lt;/strong> and &lt;strong>eigenvalues (importance)&lt;/strong> from the &lt;strong>covariance matrix&lt;/strong>.&lt;/li>
&lt;li>PCA &lt;strong>selects the top components with the highest eigenvalues&lt;/strong> and &lt;strong>projects the data onto them to simplify the dataset&lt;/strong>.&lt;/li>
&lt;/ul>
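&lt;p>The steps above can be sketched end to end (an illustrative NumPy example on synthetic data, not from the notes): centre the data, form the covariance matrix, take its eigenvectors, and project onto the top component.&lt;/p>

```python
import numpy as np

# Synthetic correlated 2-D data
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2)) @ np.array([[2.0, 0.0], [1.2, 0.3]])

Xc = X - X.mean(axis=0)               # 1. centre the data
C = np.cov(Xc, rowvar=False)          # 2. covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)  # 3. eigh suits symmetric matrices

# 4. sort components by decreasing eigenvalue (variance explained)
order = np.argsort(eigvals)[::-1]
components = eigvecs[:, order]

Z = Xc @ components[:, :1]            # 5. project onto top component
print(Z.shape)  # (200, 1) -- two features reduced to one
```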
&lt;blockquote class="book-hint default">
&lt;p>PCA prioritizes the directions where the data varies the most because more variation = more useful information.&lt;/p></description></item><item><title>PCA Theory</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/pca-theory/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/pca-theory/</guid><description>&lt;h1 id="pca-theory">
 PCA Theory
 
 &lt;a class="anchor" href="#pca-theory">#&lt;/a>
 
&lt;/h1>
&lt;ul>
&lt;li>Problem setting&lt;/li>
&lt;li>Maximum variance perspective&lt;/li>
&lt;li>Projection perspective&lt;/li>
&lt;li>Eigenvector and low-rank approximations&lt;/li>
&lt;/ul>
</description></item><item><title>PCA in Practice</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/pca-practice/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/pca-practice/</guid><description>&lt;h1 id="pca-in-practice">
 PCA in Practice
 
 &lt;a class="anchor" href="#pca-in-practice">#&lt;/a>
 
&lt;/h1>
&lt;p>Key steps of PCA in practice, including considerations in high dimensions.&lt;/p>
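&lt;p>One of those practical steps, deciding how many components to keep, is often done by thresholding the cumulative explained variance of the sorted eigenvalues. A minimal sketch (the helper name &lt;code>choose_k&lt;/code> and the 0.95 default are illustrative assumptions, not from this page):&lt;/p>

```python
def choose_k(eigenvalues, threshold=0.95):
    """Smallest number of principal components whose eigenvalues
    explain at least `threshold` of the total variance."""
    vals = sorted(eigenvalues, reverse=True)
    total = sum(vals)
    running = 0.0
    for k, v in enumerate(vals, start=1):
        running += v
        if running / total >= threshold:
            return k
    return len(vals)
```

&lt;p>In high dimensions the eigenvalue spectrum often decays quickly, so a small k captures most of the variance.&lt;/p>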
</description></item><item><title>Latent Variable Perspective</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/latent-variable-view/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/latent-variable-view/</guid><description>&lt;h1 id="latent-variable-perspective">
 Latent Variable Perspective
 
 &lt;a class="anchor" href="#latent-variable-perspective">#&lt;/a>
 
&lt;/h1>
&lt;p>PCA can be interpreted as modelling data using a smaller number of latent variables.&lt;/p>
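&lt;p>One standard way to make this concrete (the probabilistic PCA formulation, stated here as background) is to generate each \(d\)-dimensional observation from a \(k\)-dimensional latent code with \(k \lt d\):&lt;/p>

&lt;span>
 \[ 
x = B z + \mu + \epsilon, \qquad z \sim \mathcal{N}(0, I_k), \quad \epsilon \sim \mathcal{N}(0, \sigma^2 I_d)
 \]
 &lt;/span>

&lt;p>Maximising the likelihood of this model recovers the same principal subspace that classical PCA finds.&lt;/p>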
</description></item><item><title>Mathematical Preliminaries of SVM</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/svm-mathematical-foundations/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/svm-mathematical-foundations/</guid><description>&lt;h1 id="mathematical-preliminaries-of-svm">
 Mathematical Preliminaries of SVM
 
 &lt;a class="anchor" href="#mathematical-preliminaries-of-svm">#&lt;/a>
 
&lt;/h1>
&lt;ul>
&lt;li>Primal and dual perspectives&lt;/li>
&lt;li>Geometry of margins&lt;/li>
&lt;/ul>
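&lt;p>For orientation, the primal problem referred to above is the standard hard-margin formulation for labels \(y_i \in \{-1, +1\}\):&lt;/p>

&lt;span>
 \[ 
\min_{w,\, b} \ \tfrac{1}{2} \lVert w \rVert^2 \quad \text{s.t.} \quad y_i (w^\top x_i + b) \ge 1, \; i = 1, \dots, n
 \]
 &lt;/span>

&lt;p>Geometrically, the margin between the two supporting hyperplanes is \(2 / \lVert w \rVert\), so minimising \(\lVert w \rVert\) maximises the margin; the dual arises from the Lagrangian of these constraints.&lt;/p>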
</description></item><item><title>Nonlinear SVM and Kernels</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/kernels/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/kernels/</guid><description>&lt;h1 id="nonlinear-svm-and-kernels">
 Nonlinear SVM and Kernels
 
 &lt;a class="anchor" href="#nonlinear-svm-and-kernels">#&lt;/a>
 
&lt;/h1>
&lt;p>Kernels allow inner products in high-dimensional feature spaces without explicit mapping.&lt;/p>
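&lt;p>A small sketch of that idea (the function names &lt;code>poly2_kernel&lt;/code> and &lt;code>phi&lt;/code> are illustrative): for the degree-2 polynomial kernel, evaluating \(k(x, y) = (x^\top y)^2\) directly gives the same number as explicitly mapping both 2-D inputs into 3-D feature space and taking an ordinary inner product, without ever forming the map.&lt;/p>

```python
import math

def poly2_kernel(x, y):
    """Degree-2 polynomial kernel: k(x, y) = (x . y)^2."""
    return sum(a * b for a, b in zip(x, y)) ** 2

def phi(x):
    """Explicit feature map for 2-D input whose inner product
    reproduces poly2_kernel: phi(x) = (x1^2, x2^2, sqrt(2)*x1*x2)."""
    return (x[0] ** 2, x[1] ** 2, math.sqrt(2) * x[0] * x[1])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# The kernel computes the 3-D inner product without forming phi.
x, y = (1.0, 2.0), (3.0, 4.0)
assert math.isclose(poly2_kernel(x, y), dot(phi(x), phi(y)))
```

&lt;p>For higher-degree or RBF kernels the implicit feature space grows far faster (or becomes infinite-dimensional), which is exactly why the kernel trick matters.&lt;/p>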
</description></item></channel></rss>