<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Machine Learning on Arshad Siddiqui</title><link>https://arshadhs.github.io/tags/machine-learning/</link><description>Recent content in Machine Learning on Arshad Siddiqui</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Tue, 21 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://arshadhs.github.io/tags/machine-learning/index.xml" rel="self" type="application/rss+xml"/><item><title>Differentiation of Univariate Functions</title><link>https://arshadhs.github.io/docs/ai/maths/020-calculus/04-vector-calculus/010-univariate-differentiation/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/020-calculus/04-vector-calculus/010-univariate-differentiation/</guid><description>&lt;h1 id="differentiation-of-univariate-functions">
 Differentiation of Univariate Functions
 
 &lt;a class="anchor" href="#differentiation-of-univariate-functions">#&lt;/a>
 
&lt;/h1>
&lt;p>Differentiation measures the rate of change of a function.&lt;/p>
&lt;p>For a function f(x), the derivative at a point is defined by the limit:&lt;/p>
&lt;span style="color: red;">
 $[
f'(x) = $lim_{h $to 0} $frac{f(x+h)-f(x)}{h}
$]
&lt;/span>
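&lt;p>A minimal numerical sketch of this limit (my own illustration; the function f and the step size h below are arbitrary choices, not from this page):&lt;/p>
&lt;pre>&lt;code class="language-python">import numpy as np

# Hypothetical example: f(x) = x**2, whose exact derivative is 2x.
def f(x):
    return x ** 2

def numerical_derivative(f, x, h=1e-6):
    # Forward-difference approximation of the limit definition.
    return (f(x + h) - f(x)) / h

print(numerical_derivative(f, 3.0))  # close to 6.0
print(2 * 3.0)                       # exact derivative at x = 3
&lt;/code>&lt;/pre>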
&lt;p>Interpretation:&lt;/p>
&lt;ul>
&lt;li>Slope of tangent&lt;/li>
&lt;li>Instantaneous rate of change&lt;/li>
&lt;/ul>
&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/04-vector-calculus/">
 Vector Calculus
&lt;/a>&lt;/p></description></item><item><title>Partial Differentiation and Gradients</title><link>https://arshadhs.github.io/docs/ai/maths/020-calculus/04-vector-calculus/020-partial-derivatives-and-gradients/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/020-calculus/04-vector-calculus/020-partial-derivatives-and-gradients/</guid><description>&lt;h1 id="partial-differentiation-and-gradients">
 Partial Differentiation and Gradients
 
 &lt;a class="anchor" href="#partial-differentiation-and-gradients">#&lt;/a>
 
&lt;/h1>
&lt;p>For f(x1, x2, &amp;hellip;, xn):&lt;/p>
&lt;span style="color: red;">
 [
\frac{\partial f}{\partial x_i}
]
&lt;/span>
&lt;p>Gradient vector:&lt;/p>
&lt;span style="color: red;">
 [
\nabla f =
\begin{bmatrix}
\frac{\partial f}{\partial x_1} \
\vdots \
\frac{\partial f}{\partial x_n}
\end{bmatrix}
]
&lt;/span>
&lt;p>The gradient points in the direction of steepest ascent.&lt;/p>
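&lt;p>As a rough sketch (not part of the original notes), each partial derivative can be approximated by a finite difference; the function f below is a hypothetical example:&lt;/p>
&lt;pre>&lt;code class="language-python">import numpy as np

def f(x):
    # Hypothetical scalar function of a vector: f(x) = x1**2 + 3*x2
    return x[0] ** 2 + 3 * x[1]

def numerical_gradient(f, x, h=1e-6):
    grad = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = h
        # Partial derivative with respect to x_i via a forward difference.
        grad[i] = (f(x + step) - f(x)) / h
    return grad

x = np.array([1.0, 2.0])
print(numerical_gradient(f, x))  # approximately [2.0, 3.0]
&lt;/code>&lt;/pre>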


&lt;script src="https://arshadhs.github.io/mermaid.min.js">&lt;/script>

 &lt;script>mermaid.initialize({
 "flowchart": {
 "useMaxWidth":true
 },
 "theme": "default"
}
)&lt;/script>




&lt;pre class="mermaid">
flowchart LR
 Input --&amp;gt; Function
 Function --&amp;gt; Gradient
 Gradient --&amp;gt; Optimisation
&lt;/pre>

&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/04-vector-calculus/">
 Vector Calculus
&lt;/a>&lt;/p></description></item><item><title>Linear Independence</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/010-linear-independence/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/010-linear-independence/</guid><description>&lt;h1 id="linear-independence">
 Linear Independence
 
 &lt;a class="anchor" href="#linear-independence">#&lt;/a>
 
&lt;/h1>
&lt;p>A set of vectors is &lt;strong>linearly independent&lt;/strong> if none of them can be written as a linear combination of the others.&lt;/p>

&lt;span style="color: green;">
 &lt;span>
 \[ 
c_1\mathbf{v}_1 + \cdots + c_k\mathbf{v}_k = \mathbf{0}
\;\Rightarrow\;
c_1=\cdots=c_k=0
 \]
 &lt;/span>

&lt;/span>
&lt;p>Independence means each vector adds &lt;strong>new information&lt;/strong>.&lt;/p>
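&lt;p>One practical way to test this numerically (a sketch of my own, assuming NumPy) is to compare the matrix rank with the number of vectors:&lt;/p>
&lt;pre>&lt;code class="language-python">import numpy as np

# Stack the candidate vectors as columns of a matrix.
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v3 = np.array([1.0, 1.0, 0.0])  # v3 = v1 + v2, so it adds no new information

V = np.column_stack([v1, v2, v3])

# The set is linearly independent exactly when the rank equals the vector count.
print(np.linalg.matrix_rank(V))                # 2
print(np.linalg.matrix_rank(V) == V.shape[1])  # False: the set is dependent
&lt;/code>&lt;/pre>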
&lt;h2 id="why-it-matters">
 Why it matters
 
 &lt;a class="anchor" href="#why-it-matters">#&lt;/a>
 
&lt;/h2>
&lt;ul>
&lt;li>Detects redundancy&lt;/li>
&lt;li>Connects to rank and basis&lt;/li>
&lt;/ul>
&lt;p>If one vector can already be formed using others, it does not add anything new.&lt;/p></description></item><item><title>Gradients of Vector-Valued and Matrix Functions</title><link>https://arshadhs.github.io/docs/ai/maths/020-calculus/04-vector-calculus/030-vector-and-matrix-gradients/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/020-calculus/04-vector-calculus/030-vector-and-matrix-gradients/</guid><description>&lt;h1 id="gradients-of-vector-valued-and-matrix-functions">
 Gradients of Vector-Valued and Matrix Functions
 
 &lt;a class="anchor" href="#gradients-of-vector-valued-and-matrix-functions">#&lt;/a>
 
&lt;/h1>
&lt;p>Covers gradients when outputs or parameters are vectors/matrices.&lt;/p>
&lt;p>If f: R^n -&amp;gt; R^m, the derivative is the Jacobian.&lt;/p>
&lt;span style="color: red;">
 [
J =
\begin{bmatrix}
\frac{\partial f_1}{\partial x_1} &amp;amp; \dots &amp;amp; \frac{\partial f_1}{\partial x_n} \
\vdots &amp;amp; \ddots &amp;amp; \vdots \
\frac{\partial f_m}{\partial x_1} &amp;amp; \dots &amp;amp; \frac{\partial f_m}{\partial x_n}
\end{bmatrix}
]
&lt;/span>
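&lt;p>A small sketch (my own illustration) that builds the Jacobian of a hypothetical map from R^2 to R^2 column by column with finite differences:&lt;/p>
&lt;pre>&lt;code class="language-python">import numpy as np

def f(x):
    # Hypothetical vector-valued function mapping R^2 to R^2.
    return np.array([x[0] * x[1], x[0] + x[1] ** 2])

def numerical_jacobian(f, x, h=1e-6):
    m = f(x).size
    n = x.size
    J = np.zeros((m, n))
    for j in range(n):
        step = np.zeros_like(x)
        step[j] = h
        # Column j holds the partial derivatives with respect to x_j.
        J[:, j] = (f(x + step) - f(x)) / h
    return J

print(numerical_jacobian(f, np.array([1.0, 2.0])))
# analytic Jacobian at (1, 2): [[2, 1], [1, 4]]
&lt;/code>&lt;/pre>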
&lt;p>For scalar f(x):&lt;/p>
&lt;span style="color: red;">
 [
H = \nabla^2 f
]
&lt;/span>
&lt;p>Hessian captures curvature.&lt;/p></description></item><item><title>Reinforcement Learning</title><link>https://arshadhs.github.io/docs/ai/machine-learning/ml-reinforcement/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/machine-learning/ml-reinforcement/</guid><description>&lt;h1 id="reinforcement-learning-rl">
 Reinforcement Learning (RL)
 
 &lt;a class="anchor" href="#reinforcement-learning-rl">#&lt;/a>
 
&lt;/h1>
&lt;p>RL is learning by &lt;strong>trial and error&lt;/strong>.&lt;/p>
&lt;p>Reinforcement Learning (RL) is a type of machine learning where an &lt;strong>autonomous agent learns to make decisions by interacting with an environment&lt;/strong>.&lt;/p>
&lt;p>Instead of being told the correct answer, the agent:&lt;/p>
&lt;ul>
&lt;li>takes actions&lt;/li>
&lt;li>observes outcomes&lt;/li>
&lt;li>receives rewards or penalties&lt;/li>
&lt;li>gradually learns a strategy that maximises long-term reward&lt;/li>
&lt;/ul>

&lt;blockquote class='book-hint '>
 &lt;p>&lt;strong>Reinforcement Learning teaches an agent how to act, not what to predict.&lt;/strong>&lt;/p></description></item><item><title>Useful Gradient Identities</title><link>https://arshadhs.github.io/docs/ai/maths/020-calculus/04-vector-calculus/050-gradient-identities/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/020-calculus/04-vector-calculus/050-gradient-identities/</guid><description>&lt;h1 id="useful-gradient-identities">
 Useful Gradient Identities
 
 &lt;a class="anchor" href="#useful-gradient-identities">#&lt;/a>
 
&lt;/h1>
&lt;span style="color: red;">
 [
\nabla (a^T x) = a
]
&lt;/span>
&lt;span style="color: red;">
 [
\nabla (x^T A x) = (A + A^T)x
]
&lt;/span>
&lt;p>If A is symmetric:&lt;/p>
&lt;span style="color: red;">
 [
\nabla (x^T A x) = 2Ax
]
&lt;/span>
&lt;p>These are heavily used in &lt;strong>optimisation&lt;/strong>.&lt;/p>
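&lt;p>A quick numerical check of these identities, as a sketch only (the random vectors and the finite-difference gradient are my own choices):&lt;/p>
&lt;pre>&lt;code class="language-python">import numpy as np

rng = np.random.default_rng(0)
n = 4
a = rng.standard_normal(n)
A = rng.standard_normal((n, n))
x = rng.standard_normal(n)

def grad(f, x, h=1e-6):
    # Finite-difference gradient of a scalar function of a vector.
    g = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = h
        g[i] = (f(x + step) - f(x)) / h
    return g

# Gradient of a.T x is a.
print(np.allclose(grad(lambda v: a @ v, x), a, atol=1e-4))
# Gradient of x.T A x is (A + A.T) x.
print(np.allclose(grad(lambda v: v @ A @ v, x), (A + A.T) @ x, atol=1e-4))
&lt;/code>&lt;/pre>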
&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/04-vector-calculus/">
 Vector Calculus
&lt;/a>&lt;/p></description></item><item><title>Inner Products and Dot Product</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/040-inner-products/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/040-inner-products/</guid><description>&lt;h1 id="inner-products-and-dot-product">
 Inner Products and Dot Product
 
 &lt;a class="anchor" href="#inner-products-and-dot-product">#&lt;/a>
 
&lt;/h1>
&lt;p>An &lt;strong>inner product&lt;/strong> maps two vectors to a &lt;strong>single scalar&lt;/strong>.&lt;/p>
&lt;p>It allows us to measure:&lt;/p>
&lt;ul>
&lt;li>similarity&lt;/li>
&lt;li>vector length&lt;/li>
&lt;li>projections&lt;/li>
&lt;li>orthogonality&lt;/li>
&lt;/ul>
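&lt;p>A small illustration (not part of the original page) of how the Euclidean dot product gives these quantities; the vectors are arbitrary examples:&lt;/p>
&lt;pre>&lt;code class="language-python">import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([2.0, 0.0, 1.0])

dot = np.dot(a, b)                 # inner product (a single scalar)
length_a = np.sqrt(np.dot(a, a))   # vector length from the inner product
cos_angle = dot / (np.linalg.norm(a) * np.linalg.norm(b))

print(dot, length_a, cos_angle)
# Orthogonality: the inner product of perpendicular vectors is zero.
print(np.dot(np.array([1.0, 0.0]), np.array([0.0, 5.0])))  # 0.0
&lt;/code>&lt;/pre>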


&lt;script src="https://arshadhs.github.io/mermaid.min.js">&lt;/script>

 &lt;script>mermaid.initialize({
 "flowchart": {
 "useMaxWidth":true
 },
 "theme": "default"
}
)&lt;/script>




&lt;pre class="mermaid">
flowchart TD
T[&amp;#34;Inner&amp;lt;br/&amp;gt;products&amp;lt;br/&amp;gt;(types)&amp;#34;] --&amp;gt; DOT[&amp;#34;Euclidean&amp;lt;br/&amp;gt;Dot product&amp;#34;]
T --&amp;gt; WIP[&amp;#34;Weighted&amp;lt;br/&amp;gt;inner product&amp;#34;]
T --&amp;gt; FN[&amp;#34;Function-space&amp;lt;br/&amp;gt;(integral)&amp;#34;]
T --&amp;gt; HERM[&amp;#34;Complex&amp;lt;br/&amp;gt;Hermitian&amp;#34;]
T --&amp;gt; MAT[&amp;#34;Matrix&amp;lt;br/&amp;gt;inner product&amp;lt;br/&amp;gt;(Frobenius)&amp;#34;]

DOT --&amp;gt; Rn[&amp;#34;Vectors in&amp;lt;br/&amp;gt;
&amp;lt;span&amp;gt;
 \( \mathbb{R}^n \)
 &amp;lt;/span&amp;gt;

&amp;#34;]
WIP --&amp;gt; SPD[&amp;#34;SPD matrix&amp;lt;br/&amp;gt;W&amp;#34;]
FN --&amp;gt; L2[&amp;#34;L2 space&amp;lt;br/&amp;gt;functions&amp;#34;]
HERM --&amp;gt; Cn[&amp;#34;Vectors in&amp;lt;br/&amp;gt;C^n&amp;#34;]
MAT --&amp;gt; Mnm[&amp;#34;Matrices&amp;lt;br/&amp;gt;R^{m×n}&amp;#34;]

style T fill:#90CAF9,stroke:#1E88E5,color:#000

style DOT fill:#C8E6C9,stroke:#2E7D32,color:#000
style WIP fill:#C8E6C9,stroke:#2E7D32,color:#000
style FN fill:#C8E6C9,stroke:#2E7D32,color:#000
style HERM fill:#C8E6C9,stroke:#2E7D32,color:#000
style MAT fill:#C8E6C9,stroke:#2E7D32,color:#000

style Rn fill:#CE93D8,stroke:#8E24AA,color:#000
style SPD fill:#CE93D8,stroke:#8E24AA,color:#000
style L2 fill:#CE93D8,stroke:#8E24AA,color:#000
style Cn fill:#CE93D8,stroke:#8E24AA,color:#000
style Mnm fill:#CE93D8,stroke:#8E24AA,color:#000
&lt;/pre>

&lt;hr>
&lt;h2 id="definition">
 Definition
 
 &lt;a class="anchor" href="#definition">#&lt;/a>
 
&lt;/h2>
&lt;p>For vectors&lt;br>

&lt;span>
 \( \mathbf{a}, \mathbf{b} \in \mathbb{R}^n \)
 &lt;/span>

&lt;/p></description></item><item><title>Backpropagation and Automatic Differentiation</title><link>https://arshadhs.github.io/docs/ai/maths/020-calculus/04-vector-calculus/060-backpropagation/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/020-calculus/04-vector-calculus/060-backpropagation/</guid><description>&lt;h1 id="backpropagation-and-automatic-differentiation">
 Backpropagation and Automatic Differentiation
 
 &lt;a class="anchor" href="#backpropagation-and-automatic-differentiation">#&lt;/a>
 
&lt;/h1>
&lt;p>Backpropagation applies the chain rule:&lt;/p>
&lt;ul>
&lt;li>efficiently across a computational graph.&lt;/li>
&lt;li>repeatedly, once at each node of the graph.&lt;/li>
&lt;/ul>
&lt;p>Chain rule:&lt;/p>
&lt;span style="color: red;">
 [
\frac{dL}{dx} = \frac{dL}{dy} \cdot \frac{dy}{dx}
]
&lt;/span>
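&lt;p>A tiny hand-worked sketch of this rule on a two-step graph; the functions y = x**2 and L = 3*y are illustrative choices, not from this page:&lt;/p>
&lt;pre>&lt;code class="language-python"># Forward pass: x, then y, then L, with y = x**2 and L = 3*y (illustrative).
x = 2.0
y = x ** 2
L = 3.0 * y

# Backward pass: multiply the local derivatives along the graph.
dL_dy = 3.0          # derivative of L with respect to y
dy_dx = 2.0 * x      # derivative of y with respect to x
dL_dx = dL_dy * dy_dx

print(dL_dx)  # 12.0, matching the derivative of 3*x**2 at x = 2
&lt;/code>&lt;/pre>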


&lt;script src="https://arshadhs.github.io/mermaid.min.js">&lt;/script>

 &lt;script>mermaid.initialize({
 "flowchart": {
 "useMaxWidth":true
 },
 "theme": "default"
}
)&lt;/script>




&lt;pre class="mermaid">
flowchart LR
 x --&amp;gt; y
 y --&amp;gt; L
&lt;/pre>

&lt;p>Automatic differentiation computes exact derivatives efficiently using computational graphs.&lt;/p>
&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/04-vector-calculus/">
 Vector Calculus
&lt;/a>&lt;/p></description></item><item><title>Higher-order derivatives</title><link>https://arshadhs.github.io/docs/ai/maths/020-calculus/04-vector-calculus/070-higher-order-derivatives/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/020-calculus/04-vector-calculus/070-higher-order-derivatives/</guid><description>&lt;h1 id="higher-order-derivatives">
 Higher-order derivatives
 
 &lt;a class="anchor" href="#higher-order-derivatives">#&lt;/a>
 
&lt;/h1>
&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/04-vector-calculus/">
 Vector Calculus
&lt;/a>&lt;/p></description></item><item><title>Taylor’s series</title><link>https://arshadhs.github.io/docs/ai/maths/020-calculus/04-vector-calculus/080-taylors-series/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/020-calculus/04-vector-calculus/080-taylors-series/</guid><description>&lt;h1 id="linearization-and-multivariate-taylors-series">
 Linearization and multivariate Taylor’s series
 
 &lt;a class="anchor" href="#linearization-and-multivariate-taylors-series">#&lt;/a>
 
&lt;/h1>
&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/04-vector-calculus/">
 Vector Calculus
&lt;/a>&lt;/p></description></item><item><title>Maxima and Minima</title><link>https://arshadhs.github.io/docs/ai/maths/020-calculus/04-vector-calculus/090-maxima-and-minima/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/020-calculus/04-vector-calculus/090-maxima-and-minima/</guid><description>&lt;h1 id="computing-maxima-and-minima-for-unconstrained-optimization">
 Computing maxima and minima for unconstrained optimization
 
 &lt;a class="anchor" href="#computing-maxima-and-minima-for-unconstrained-optimization">#&lt;/a>
 
&lt;/h1>
&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/04-vector-calculus/">
 Vector Calculus
&lt;/a>&lt;/p></description></item><item><title>Naïve Bayes</title><link>https://arshadhs.github.io/docs/ai/statistics/conditional-probability/023_naive_bayes/</link><pubDate>Thu, 12 Mar 2026 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/statistics/conditional-probability/023_naive_bayes/</guid><description>&lt;h1 id="naïve-bayes">
 Naïve Bayes
 
 &lt;a class="anchor" href="#na%c3%afve-bayes">#&lt;/a>
 
&lt;/h1>
&lt;p>Naïve Bayes is a &lt;strong>probabilistic classifier&lt;/strong>.&lt;/p>
&lt;ul>
&lt;li>Supervised learning problem&lt;/li>
&lt;li>Binary classification: the target variable takes one of two classes&lt;/li>
&lt;li>The hypothesis is the class label you want to assign to an instance&lt;/li>
&lt;li>The total probability (prior) of Yes and No is calculated first, from the class frequencies&lt;/li>
&lt;li>The posterior is obtained once the observed data (features) is taken into account&lt;/li>
&lt;li>The instance is assigned to the hypothesis (class) with the maximum posterior probability; see the sketch below&lt;/li>
&lt;/ul>
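&lt;p>A minimal sketch of the prior/posterior counting described above, using a tiny made-up yes/no dataset:&lt;/p>
&lt;pre>&lt;code class="language-python"># Hypothetical training data: (weather, play) pairs for a yes/no decision.
data = [('sunny', 'yes'), ('sunny', 'no'), ('rain', 'yes'),
        ('rain', 'yes'), ('sunny', 'yes')]

labels = [y for _, y in data]
prior_yes = labels.count('yes') / len(labels)   # prior P(yes)
prior_no = labels.count('no') / len(labels)     # prior P(no)

# Likelihoods P(weather = sunny | class), from simple counting.
sunny_yes = sum(1 for x, y in data if x == 'sunny' and y == 'yes') / labels.count('yes')
sunny_no = sum(1 for x, y in data if x == 'sunny' and y == 'no') / labels.count('no')

# Unnormalised posterior scores; classify as the class with the larger score.
score_yes = prior_yes * sunny_yes
score_no = prior_no * sunny_no
print(score_yes, score_no)
&lt;/code>&lt;/pre>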
&lt;p>It predicts a class label by computing:&lt;/p></description></item><item><title>Mathematical Foundation</title><link>https://arshadhs.github.io/docs/ai/maths/</link><pubDate>Wed, 18 Mar 2026 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/</guid><description>&lt;h1 id="mathematical-foundations-for-machine-learning">
 Mathematical Foundations for Machine Learning
 
 &lt;a class="anchor" href="#mathematical-foundations-for-machine-learning">#&lt;/a>
 
&lt;/h1>
&lt;p>Machine Learning is built on &lt;strong>mathematical principles&lt;/strong> that allow models to:&lt;/p>
&lt;ul>
&lt;li>represent data&lt;/li>
&lt;li>learn patterns&lt;/li>
&lt;li>optimise performance&lt;/li>
&lt;/ul>


&lt;script src="https://arshadhs.github.io/mermaid.min.js">&lt;/script>

 &lt;script>mermaid.initialize({
 "flowchart": {
 "useMaxWidth":true
 },
 "theme": "default"
}
)&lt;/script>




&lt;pre class="mermaid">
flowchart LR
 DATA[Data]
 MATH[Math Models]
 OPT[Optimisation]
 MODEL[Trained Model]

 DATA --&amp;gt; MATH
 MATH --&amp;gt; OPT
 OPT --&amp;gt; MODEL
&lt;/pre>

&lt;p>Understanding how ML algorithms work internally requires a set of &lt;strong>core mathematical tools&lt;/strong>. Algebra deals with relationships between variables and quantities, while Calculus focuses on change and optimisation.&lt;/p></description></item><item><title>Linear Algebra</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/</link><pubDate>Wed, 18 Mar 2026 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/</guid><description>&lt;h1 id="linear-algebra">
 Linear Algebra
 
 &lt;a class="anchor" href="#linear-algebra">#&lt;/a>
 
&lt;/h1>
&lt;p>The &lt;strong>study of vectors and matrices&lt;/strong> is called Linear Algebra.&lt;/p>
&lt;p>Linear Algebra provides the &lt;strong>mathematical language&lt;/strong> used &lt;strong>to represent data, transformations, and structure&lt;/strong> in ML.&lt;/p>




&lt;ul>
 
 
 
 
 
 
 
 
 
 
 
 
 
 

 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/">Linear Systems&lt;/a>

 
 



&lt;ul>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/010-systems-of-linear-equations/">Systems of Linear Equations&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/020-matrices/">Matrices&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/matrix-transposition/">Matrix Transposition&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/030-solving-linear-systems/">Solving Linear Systems&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/forward-backward/">Forward and Backward Substitution&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/inverse-matrix/">Inverse Matrix&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/convex/">Convex Combination&lt;/a>
 &lt;/li>
 
 

 
 
&lt;/ul>

 &lt;/li>
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/">Vector Spaces&lt;/a>

 
 



&lt;ul>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/020-basis-and-rank/">Basis and Rank&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/010-linear-independence/">Linear Independence&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/030-norm/">Norm&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/040-inner-products/">Inner Products and Dot Product&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/050-lengths-and-distances/">Lengths and Distances&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/060-angles-and-orthogonality/">Angles and Orthogonality&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/070-orthonormal-basis/">Orthonormal Basis&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/feature-space/">Feature Space&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/cauchyschwarz/">Cauchy–Schwarz&lt;/a>
 &lt;/li>
 
 

 
 
&lt;/ul>

 &lt;/li>
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/">Matrix Decompositions&lt;/a>

 
 



&lt;ul>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/special-matrices/">Special Matrices&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/characteristic-polynomial/">Characteristic Polynomial&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/010-determinant-and-trace/">Determinant and Trace&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/020-eigenvalues-and-eigenvectors/">Eigenvalues and Eigenvectors&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/030-cholesky-decomposition/">Cholesky Decomposition&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/040-eigen-decomposition/">Eigen Decomposition&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/diagonalization/">Diagonalization&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/050-singular-value-decomposition/">Singular Value Decomposition (SVD)&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/060-matrix-approximation/">Matrix Approximation&lt;/a>
 &lt;/li>
 
 

 
 
&lt;/ul>

 &lt;/li>
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/">Dimensionality reduction and PCA&lt;/a>

 
 



&lt;ul>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/pca/">Principal Component Analysis (PCA)&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/pca-theory/">PCA Theory&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/pca-practice/">PCA in Practice&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/latent-variable-view/">Latent Variable Perspective&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/svm-mathematical-foundations/">Mathematical Preliminaries of SVM&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/kernels/">Nonlinear SVM and Kernels&lt;/a>
 &lt;/li>
 
 

 
 
&lt;/ul>

 &lt;/li>
 
&lt;/ul>


&lt;hr>
&lt;h2 id="why-linear-algebra-matters-in-ml">
 Why Linear Algebra Matters in ML
 
 &lt;a class="anchor" href="#why-linear-algebra-matters-in-ml">#&lt;/a>
 
&lt;/h2>
&lt;ul>
&lt;li>Every machine learning model uses matrices&lt;/li>
&lt;li>All data in ML is represented using &lt;strong>vectors and matrices&lt;/strong>&lt;/li>
&lt;li>Neural networks are pipelines of matrix operations&lt;/li>
&lt;li>Models apply &lt;strong>matrix transformations&lt;/strong> to data&lt;/li>
&lt;li>Optimisation relies on linear algebra operations&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h2 id="what-to-learn">
 What to Learn
 
 &lt;a class="anchor" href="#what-to-learn">#&lt;/a>
 
&lt;/h2>
&lt;ul>
&lt;li>Scalars, vectors, and matrices&lt;/li>
&lt;li>Vector operations (addition, dot product)&lt;/li>
&lt;li>Matrix multiplication &lt;em>(critical)&lt;/em>&lt;/li>
&lt;li>Identity matrices and transpose&lt;/li>
&lt;li>Eigenvalues and eigenvectors &lt;em>(conceptual understanding)&lt;/em>&lt;/li>
&lt;/ul>
&lt;hr>
&lt;ul>
&lt;li>&lt;strong>Scalar&lt;/strong> → a number&lt;/li>
&lt;li>&lt;strong>Vector&lt;/strong> → a directed point&lt;/li>
&lt;li>&lt;strong>Matrix&lt;/strong> → a space transformer&lt;/li>
&lt;li>&lt;strong>Linear transformation&lt;/strong> → structured mapping&lt;/li>
&lt;li>&lt;strong>Feature&lt;/strong> → one axis&lt;/li>
&lt;li>&lt;strong>Feature space&lt;/strong> → where data lives&lt;/li>
&lt;li>&lt;strong>Vector space&lt;/strong> → where vectors live&lt;/li>
&lt;/ul>
&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/">
 Mathematical Foundation
&lt;/a>&lt;/p></description></item><item><title>Linear Systems</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/</link><pubDate>Thu, 29 Jan 2026 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/</guid><description>&lt;h1 id="linear-systems">
 Linear Systems
 
 &lt;a class="anchor" href="#linear-systems">#&lt;/a>
 
&lt;/h1>
&lt;p>How systems of linear equations are represented and solved using matrices.&lt;/p>
&lt;ul>
&lt;li>the study of vectors and the rules for manipulating them&lt;/li>
&lt;li>describe multiple linear equations solved simultaneously&lt;/li>
&lt;li>connect algebraic equations with matrix representations&lt;/li>
&lt;/ul>
&lt;hr>
&lt;p>&lt;img src="https://arshadhs.github.io/images/ai/matrix_vector_operations.png" alt="Matrix" />&lt;/p>
&lt;hr>
&lt;h2 id="idea-of-closure">
 Idea of Closure
 
 &lt;a class="anchor" href="#idea-of-closure">#&lt;/a>
 
&lt;/h2>
&lt;ul>
&lt;li>
&lt;p>performing a specific operation (like addition or multiplication) on members of a set always produces a result that belongs to the same set&lt;/p>
&lt;/li>
&lt;li>
&lt;p>idea of closure is fundamental to defining a &lt;strong>&lt;a href="https://arshadhs.github.io/docs/ai/linear-algebra/01-linear-systems">Vector space&lt;/a>&lt;/strong> because it ensures that performing arithmetic operations (addition and scalar multiplication) on vectors within a set does not produce a new element outside that set.&lt;/p></description></item><item><title>Systems of Linear Equations</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/010-systems-of-linear-equations/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/010-systems-of-linear-equations/</guid><description>&lt;h1 id="systems-of-linear-equations">
 Systems of Linear Equations
 
 &lt;a class="anchor" href="#systems-of-linear-equations">#&lt;/a>
 
&lt;/h1>
&lt;p>A system of linear equations can be written compactly as:&lt;/p>
&lt;blockquote class="book-hint danger">
&lt;link rel="stylesheet" href="https://arshadhs.github.io/katex/katex.min.css" />
&lt;script defer src="https://arshadhs.github.io/katex/katex.min.js">&lt;/script>
 &lt;script defer src="https://arshadhs.github.io/katex/auto-render.min.js" onload="renderMathInElement(document.body, {
 &amp;#34;delimiters&amp;#34;: [
 {&amp;#34;left&amp;#34;: &amp;#34;$$&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;$$&amp;#34;, &amp;#34;display&amp;#34;: true},
 {&amp;#34;left&amp;#34;: &amp;#34;$&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;$&amp;#34;, &amp;#34;display&amp;#34;: false},
 {&amp;#34;left&amp;#34;: &amp;#34;\\(&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;\\)&amp;#34;, &amp;#34;display&amp;#34;: false},
 {&amp;#34;left&amp;#34;: &amp;#34;\\[&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;\\]&amp;#34;, &amp;#34;display&amp;#34;: true}
 ]
});">&lt;/script>
&lt;span>
 \[ 
A\mathbf{x}=\mathbf{b}
 \]
 &lt;/span>
&lt;/blockquote>
&lt;p>This represents:&lt;/p>
&lt;ul>
&lt;li>a &lt;strong>linear transformation&lt;/strong> applied to an unknown vector (\mathbf{x})&lt;/li>
&lt;li>producing an output vector (\mathbf{b})&lt;/li>
&lt;/ul>
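&lt;p>A short sketch (the numbers are made up) of setting up and solving such a system with NumPy:&lt;/p>
&lt;pre>&lt;code class="language-python">import numpy as np

# Example system: 2x + y = 5 and x + 3y = 10 (hypothetical numbers).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

x = np.linalg.solve(A, b)     # solves A x = b directly
print(x)                      # [1.0, 3.0]
print(np.allclose(A @ x, b))  # True: the solution satisfies the system
&lt;/code>&lt;/pre>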
&lt;hr>
&lt;h2 id="key-components">
 Key components
 
 &lt;a class="anchor" href="#key-components">#&lt;/a>
 
&lt;/h2>
&lt;h3 id="coefficient-matrix-a">
 Coefficient matrix (A)
 
 &lt;a class="anchor" href="#coefficient-matrix-a">#&lt;/a>
 
&lt;/h3>
&lt;p>(A) contains the coefficients of the variables.&lt;/p></description></item><item><title>Matrices</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/020-matrices/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/020-matrices/</guid><description>&lt;h1 id="matrices">
 Matrices
 
 &lt;a class="anchor" href="#matrices">#&lt;/a>
 
&lt;/h1>
&lt;p>Matrices are the &lt;strong>core data structure of linear algebra&lt;/strong> and the &lt;strong>workhorse of machine learning&lt;/strong>.&lt;br>
Almost every ML model can be described as a sequence of matrix operations.&lt;/p>
&lt;ul>
&lt;li>&lt;a href="https://arshadhs.github.io/docs/ai/maths/linear-algebra/03-matrix-decomposition/special-matrices/">Special Matrices&lt;/a>&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h2 id="matrix">
 Matrix
 
 &lt;a class="anchor" href="#matrix">#&lt;/a>
 
&lt;/h2>
&lt;p>A &lt;strong>matrix&lt;/strong> is a rectangular array of numbers arranged in &lt;strong>rows and columns&lt;/strong>.&lt;/p>
&lt;blockquote class="book-hint danger">
&lt;link rel="stylesheet" href="https://arshadhs.github.io/katex/katex.min.css" />
&lt;script defer src="https://arshadhs.github.io/katex/katex.min.js">&lt;/script>
 &lt;script defer src="https://arshadhs.github.io/katex/auto-render.min.js" onload="renderMathInElement(document.body, {
 &amp;#34;delimiters&amp;#34;: [
 {&amp;#34;left&amp;#34;: &amp;#34;$$&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;$$&amp;#34;, &amp;#34;display&amp;#34;: true},
 {&amp;#34;left&amp;#34;: &amp;#34;$&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;$&amp;#34;, &amp;#34;display&amp;#34;: false},
 {&amp;#34;left&amp;#34;: &amp;#34;\\(&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;\\)&amp;#34;, &amp;#34;display&amp;#34;: false},
 {&amp;#34;left&amp;#34;: &amp;#34;\\[&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;\\]&amp;#34;, &amp;#34;display&amp;#34;: true}
 ]
});">&lt;/script>
&lt;span>
 \[ 
A \in \mathbb{R}^{m \times n}
 \]
 &lt;/span>
&lt;/blockquote>
&lt;p>An ( m \times n ) matrix has:&lt;/p></description></item><item><title>Solving Linear Systems</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/030-solving-linear-systems/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/030-solving-linear-systems/</guid><description>&lt;h1 id="solving-linear-systems">
 Solving Linear Systems
 
 &lt;a class="anchor" href="#solving-linear-systems">#&lt;/a>
 
&lt;/h1>
&lt;p>Solve using:&lt;/p>
&lt;ul>
&lt;li>Substitution Method&lt;/li>
&lt;li>Elimination Method (Multiple &amp;amp; then Subtract)&lt;/li>
&lt;li>Cross Multiplication&lt;/li>
&lt;/ul>
&lt;p>Linear system can have:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>no solution&lt;/strong>&lt;/li>
&lt;li>&lt;strong>a unique solution&lt;/strong>&lt;/li>
&lt;li>&lt;strong>infinitely many solutions&lt;/strong>&lt;/li>
&lt;/ul>
&lt;h2 id="positive-definite-matrices">
 Positive Definite Matrices
 
 &lt;a class="anchor" href="#positive-definite-matrices">#&lt;/a>
 
&lt;/h2>
&lt;p>A square matrix is positive definite if pre-multiplying and post-multiplying it by the same vector always gives a positive number, no matter which nonzero vector we choose.&lt;/p>
&lt;p>Positive definite symmetric matrices have the property that all their eigenvalues are positive.&lt;/p></description></item><item><title>Forward and Backward Substitution</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/forward-backward/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/forward-backward/</guid><description>&lt;h1 id="forward-and-backward-substitution">
 Forward and Backward Substitution
 
 &lt;a class="anchor" href="#forward-and-backward-substitution">#&lt;/a>
 
&lt;/h1>
&lt;p>Forward and backward substitution are efficient algorithms used to solve linear systems when the coefficient matrix is &lt;strong>triangular&lt;/strong>.&lt;/p>
&lt;p>They are typically used after:&lt;/p>
&lt;ul>
&lt;li>Gaussian elimination&lt;/li>
&lt;li>LU decomposition&lt;/li>
&lt;/ul>
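&lt;p>A compact sketch of forward substitution itself, assuming a lower-triangular matrix with a nonzero diagonal (the numbers are illustrative):&lt;/p>
&lt;pre>&lt;code class="language-python">import numpy as np

def forward_substitution(L, b):
    # Solve L x = b for a lower-triangular L, one component at a time.
    n = b.size
    x = np.zeros(n)
    for i in range(n):
        x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

L = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [4.0, 1.0, 5.0]])
b = np.array([4.0, 5.0, 16.0])
print(forward_substitution(L, b))   # matches np.linalg.solve(L, b)
&lt;/code>&lt;/pre>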
&lt;hr>
&lt;h1 id="1-forward-substitution-lower-triangular-systems">
 1. Forward Substitution (Lower Triangular Systems)
 
 &lt;a class="anchor" href="#1-forward-substitution-lower-triangular-systems">#&lt;/a>
 
&lt;/h1>
&lt;p>Used to solve:&lt;/p>
&lt;blockquote class="book-hint danger">
&lt;link rel="stylesheet" href="https://arshadhs.github.io/katex/katex.min.css" />
&lt;script defer src="https://arshadhs.github.io/katex/katex.min.js">&lt;/script>
 &lt;script defer src="https://arshadhs.github.io/katex/auto-render.min.js" onload="renderMathInElement(document.body, {
 &amp;#34;delimiters&amp;#34;: [
 {&amp;#34;left&amp;#34;: &amp;#34;$$&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;$$&amp;#34;, &amp;#34;display&amp;#34;: true},
 {&amp;#34;left&amp;#34;: &amp;#34;$&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;$&amp;#34;, &amp;#34;display&amp;#34;: false},
 {&amp;#34;left&amp;#34;: &amp;#34;\\(&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;\\)&amp;#34;, &amp;#34;display&amp;#34;: false},
 {&amp;#34;left&amp;#34;: &amp;#34;\\[&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;\\]&amp;#34;, &amp;#34;display&amp;#34;: true}
 ]
});">&lt;/script>
&lt;span>
 \[ 
L\mathbf{x} = \mathbf{b}
 \]
 &lt;/span>
&lt;/blockquote>
&lt;p>where (L) is a &lt;strong>lower triangular matrix&lt;/strong>:&lt;/p></description></item><item><title>Inverse Matrix</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/inverse-matrix/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/01-linear-systems/inverse-matrix/</guid><description>&lt;h1 id="inverse-matrix">
 Inverse Matrix
 
 &lt;a class="anchor" href="#inverse-matrix">#&lt;/a>
 
&lt;/h1>
&lt;p>The &lt;strong>inverse of a matrix&lt;/strong> is a matrix that, when multiplied with the original matrix, produces the &lt;strong>identity matrix&lt;/strong>.&lt;/p>
&lt;p>A square matrix (A) is &lt;strong>invertible&lt;/strong> if there exists a matrix (A^{-1}) such that:&lt;/p>
&lt;blockquote class="book-hint danger">
&lt;link rel="stylesheet" href="https://arshadhs.github.io/katex/katex.min.css" />
&lt;script defer src="https://arshadhs.github.io/katex/katex.min.js">&lt;/script>
 &lt;script defer src="https://arshadhs.github.io/katex/auto-render.min.js" onload="renderMathInElement(document.body, {
 &amp;#34;delimiters&amp;#34;: [
 {&amp;#34;left&amp;#34;: &amp;#34;$$&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;$$&amp;#34;, &amp;#34;display&amp;#34;: true},
 {&amp;#34;left&amp;#34;: &amp;#34;$&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;$&amp;#34;, &amp;#34;display&amp;#34;: false},
 {&amp;#34;left&amp;#34;: &amp;#34;\\(&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;\\)&amp;#34;, &amp;#34;display&amp;#34;: false},
 {&amp;#34;left&amp;#34;: &amp;#34;\\[&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;\\]&amp;#34;, &amp;#34;display&amp;#34;: true}
 ]
});">&lt;/script>
&lt;span>
 \[ 
AA^{-1} = A^{-1}A = I
 \]
 &lt;/span>
&lt;/blockquote>
&lt;p>Here:&lt;/p></description></item><item><title>Vector Spaces</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/</guid><description>&lt;h1 id="vector-spaces">
 Vector Spaces
 
 &lt;a class="anchor" href="#vector-spaces">#&lt;/a>
 
&lt;/h1>
&lt;p>A vector space is the mathematical “home” where vectors live and where addition and scaling are valid operations.&lt;/p>
&lt;ul>
&lt;li>
&lt;p>A vector space is a set closed under vector addition and scalar multiplication.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Machine learning operates in vector spaces.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>covers independence, bases, rank, and geometric tools like norms and inner products that are used to measure length, distance, and angles.&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>A &lt;strong>vector space&lt;/strong> is a set of vectors that follows &lt;strong>ten axioms&lt;/strong>, defined under two operations:&lt;/p></description></item><item><title>Feature Space</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/feature-space/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/feature-space/</guid><description>&lt;h1 id="feature">
 Feature
 
 &lt;a class="anchor" href="#feature">#&lt;/a>
 
&lt;/h1>
&lt;p>A &lt;strong>feature&lt;/strong> is an individual measurable property or characteristic of a data point used as input to a machine learning model.&lt;/p>
&lt;p>Each feature corresponds to &lt;strong>one dimension&lt;/strong>.&lt;/p>

&lt;link rel="stylesheet" href="https://arshadhs.github.io/katex/katex.min.css" />
&lt;script defer src="https://arshadhs.github.io/katex/katex.min.js">&lt;/script>

 &lt;script defer src="https://arshadhs.github.io/katex/auto-render.min.js" onload="renderMathInElement(document.body, {
 &amp;#34;delimiters&amp;#34;: [
 {&amp;#34;left&amp;#34;: &amp;#34;$$&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;$$&amp;#34;, &amp;#34;display&amp;#34;: true},
 {&amp;#34;left&amp;#34;: &amp;#34;$&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;$&amp;#34;, &amp;#34;display&amp;#34;: false},
 {&amp;#34;left&amp;#34;: &amp;#34;\\(&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;\\)&amp;#34;, &amp;#34;display&amp;#34;: false},
 {&amp;#34;left&amp;#34;: &amp;#34;\\[&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;\\]&amp;#34;, &amp;#34;display&amp;#34;: true}
 ]
});">&lt;/script>

&lt;span>
 \[ 
x_i \in \mathbb{R}
 \]
 &lt;/span>


&lt;p>A data point with ( d ) features is represented as:&lt;/p></description></item><item><title>Cauchy–Schwarz</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/cauchyschwarz/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/02-vector-spaces/cauchyschwarz/</guid><description>&lt;h1 id="cauchyschwarz-inequality">
 Cauchy–Schwarz Inequality
 
 &lt;a class="anchor" href="#cauchyschwarz-inequality">#&lt;/a>
 
&lt;/h1>
&lt;p>The &lt;strong>Cauchy–Schwarz Inequality&lt;/strong> is one of the most important results in linear algebra.&lt;/p>
&lt;p>It places a fundamental bound on the inner product of two vectors.&lt;/p>
&lt;blockquote class="book-hint info">
&lt;p>If you see &lt;strong>angle&lt;/strong>, &lt;strong>cosine&lt;/strong>, &lt;strong>similarity&lt;/strong>, or &lt;strong>inner product bounds&lt;/strong>&lt;br>
→ think &lt;strong>Cauchy–Schwarz Inequality&lt;/strong>&lt;/p>
&lt;p>Key Idea:
The magnitude of the inner product (dot product) can never exceed the product of the vector magnitudes.
This ensures all geometric interpretations (angles, cosine) are valid.&lt;/p>
&lt;/blockquote>
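&lt;p>A quick numerical illustration of the bound (my own sketch with random vectors):&lt;/p>
&lt;pre>&lt;code class="language-python">import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal(5)
b = rng.standard_normal(5)

lhs = abs(np.dot(a, b))                       # magnitude of the inner product
rhs = np.linalg.norm(a) * np.linalg.norm(b)   # product of the lengths

print(lhs, rhs)   # lhs never exceeds rhs
print(lhs / rhs)  # the cosine of the angle, always at most 1 in magnitude
&lt;/code>&lt;/pre>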
&lt;hr>
&lt;h2 id="statement-of-the-inequality">
 Statement of the Inequality
 
 &lt;a class="anchor" href="#statement-of-the-inequality">#&lt;/a>
 
&lt;/h2>
&lt;p>For any vectors:&lt;/p></description></item><item><title>Matrix Decompositions</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/</link><pubDate>Wed, 18 Mar 2026 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/</guid><description>&lt;h1 id="matrix-decompositions">
 Matrix Decompositions
 
 &lt;a class="anchor" href="#matrix-decompositions">#&lt;/a>
 
&lt;/h1>
&lt;p>Decompositions reveal structure in matrices and power algorithms like PCA.&lt;/p>
&lt;p>Matrix decompositions break complex matrices into simpler parts.&lt;/p>
&lt;p>From the lecture introduction, matrices are used to describe mappings and transformations of vectors.&lt;/p>
&lt;p>That is why decomposition is important:
it lets us understand a complicated transformation by rewriting it using simpler building blocks.&lt;/p>
&lt;p>In the slides, the topic is introduced as part of three closely connected goals:
how to summarise matrices,
how matrices can be decomposed,
and how the decompositions can be used for matrix approximations.&lt;/p></description></item><item><title>Characteristic Polynomial</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/characteristic-polynomial/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/characteristic-polynomial/</guid><description>&lt;h1 id="characteristic-polynomial">
 Characteristic Polynomial
 
 &lt;a class="anchor" href="#characteristic-polynomial">#&lt;/a>
 
&lt;/h1>
&lt;p>The &lt;strong>characteristic polynomial&lt;/strong> of a square matrix is the key tool used to compute &lt;strong>eigenvalues&lt;/strong>.&lt;/p>
&lt;p>It connects:&lt;/p>
&lt;ul>
&lt;li>Determinants&lt;/li>
&lt;li>Trace&lt;/li>
&lt;li>Eigenvalues&lt;/li>
&lt;li>Matrix structure&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h2 id="definition">
 Definition
 
 &lt;a class="anchor" href="#definition">#&lt;/a>
 
&lt;/h2>
&lt;p>Let&lt;br>

&lt;span>
 \( A \in \mathbb{R}^{n \times n} \)
 &lt;/span>

&lt;br>
and 
&lt;span>
 \( \lambda \in \mathbb{R} \)
 &lt;/span>

.&lt;/p>
&lt;p>The &lt;strong>characteristic polynomial&lt;/strong> of (A) is defined as:&lt;/p>
&lt;blockquote class="book-hint danger">
&lt;link rel="stylesheet" href="https://arshadhs.github.io/katex/katex.min.css" />
&lt;script defer src="https://arshadhs.github.io/katex/katex.min.js">&lt;/script>
 &lt;script defer src="https://arshadhs.github.io/katex/auto-render.min.js" onload="renderMathInElement(document.body, {
 &amp;#34;delimiters&amp;#34;: [
 {&amp;#34;left&amp;#34;: &amp;#34;$$&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;$$&amp;#34;, &amp;#34;display&amp;#34;: true},
 {&amp;#34;left&amp;#34;: &amp;#34;$&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;$&amp;#34;, &amp;#34;display&amp;#34;: false},
 {&amp;#34;left&amp;#34;: &amp;#34;\\(&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;\\)&amp;#34;, &amp;#34;display&amp;#34;: false},
 {&amp;#34;left&amp;#34;: &amp;#34;\\[&amp;#34;, &amp;#34;right&amp;#34;: &amp;#34;\\]&amp;#34;, &amp;#34;display&amp;#34;: true}
 ]
});">&lt;/script>
&lt;span>
 \[ 
p_A(\lambda) = \det(A - \lambda I)
 \]
 &lt;/span>
&lt;/blockquote>
&lt;p>It is a polynomial in 
&lt;span>
 \( \lambda \)
 &lt;/span>

 of degree (n).&lt;/p></description></item><item><title>Determinant and Trace</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/010-determinant-and-trace/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/010-determinant-and-trace/</guid><description>&lt;h1 id="determinant-and-trace">
 Determinant and Trace
 
 &lt;a class="anchor" href="#determinant-and-trace">#&lt;/a>
 
&lt;/h1>
&lt;hr>
&lt;h2 id="minor">
 Minor
 
 &lt;a class="anchor" href="#minor">#&lt;/a>
 
&lt;/h2>
&lt;p>The &lt;strong>minor&lt;/strong> of an element 
&lt;span>
 \( a_{ij} \)
 &lt;/span>

 is the determinant of the smaller square matrix formed by:&lt;/p>
&lt;ul>
&lt;li>removing &lt;strong>row&lt;/strong> 
&lt;span>
 \( i \)
 &lt;/span>

&lt;/li>
&lt;li>removing &lt;strong>column&lt;/strong> 
&lt;span>
 \( j \)
 &lt;/span>

&lt;/li>
&lt;/ul>
&lt;p>The minor is denoted 
&lt;span>
 \( M_{ij} \)
 &lt;/span>

.&lt;/p>
&lt;blockquote class="book-hint info">
&lt;p>Minors are used to compute &lt;strong>cofactors&lt;/strong>, which are used for determinants and inverses (via adjoint/adjugate).&lt;/p>
&lt;/blockquote>
&lt;hr>
&lt;h2 id="cofactor">
 Cofactor
 
 &lt;a class="anchor" href="#cofactor">#&lt;/a>
 
&lt;/h2>
&lt;p>The &lt;strong>cofactor&lt;/strong> of 
&lt;span>
 \( a_{ij} \)
 &lt;/span>

, denoted 
&lt;span>
 \( C_{ij} \)
 &lt;/span>

, is:&lt;/p></description></item><item><title>Eigenvalues and Eigenvectors</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/020-eigenvalues-and-eigenvectors/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/020-eigenvalues-and-eigenvectors/</guid><description>&lt;h1 id="eigenvalues-and-eigenvectors">
 Eigenvalues and Eigenvectors
 
 &lt;a class="anchor" href="#eigenvalues-and-eigenvectors">#&lt;/a>
 
&lt;/h1>
&lt;ul>
&lt;li>Eigenvalues give scaling.&lt;/li>
&lt;li>Eigenvectors define invariant directions of transformation.&lt;/li>
&lt;/ul>
&lt;p>Eigenvalues and eigenvectors describe directions that remain unchanged under a linear transformation, except for scaling.&lt;/p>
&lt;p>From lectures:
matrix multiplication represents a transformation of space.&lt;br>
Most vectors change direction and magnitude.&lt;br>
Some special vectors only scale.&lt;br>
These are eigenvectors.&lt;/p>
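&lt;p>A minimal sketch (my own example matrix) showing that an eigenvector is only scaled by the transformation:&lt;/p>
&lt;pre>&lt;code class="language-python">import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])   # a simple diagonal example

eigenvalues, eigenvectors = np.linalg.eig(A)
v = eigenvectors[:, 0]        # first eigenvector
lam = eigenvalues[0]          # its eigenvalue

# The transformed vector equals the original vector scaled by the eigenvalue.
print(A @ v)
print(lam * v)
print(np.allclose(A @ v, lam * v))   # True
&lt;/code>&lt;/pre>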
&lt;blockquote class="book-hint info">
&lt;p>Key Idea:
A matrix transformation stretches or compresses vectors.
Eigenvectors are directions that remain unchanged.
Eigenvalues tell how much scaling happens.&lt;/p></description></item><item><title>Cholesky Decomposition</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/030-cholesky-decomposition/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/030-cholesky-decomposition/</guid><description>&lt;h1 id="cholesky-decomposition">
 Cholesky Decomposition
 
 &lt;a class="anchor" href="#cholesky-decomposition">#&lt;/a>
 
&lt;/h1>
&lt;p>Cholesky decomposition is a special matrix factorisation used for symmetric positive definite matrices.&lt;/p>
&lt;p>From lecture discussions, this decomposition is powerful because it reduces a matrix into a triangular form, making computations easier and more stable.&lt;/p>
&lt;blockquote class="book-hint info">
&lt;p>Key Idea:
Cholesky decomposition expresses a matrix as a product of a lower triangular matrix and its transpose.
It is efficient and numerically stable.&lt;/p>
&lt;/blockquote>
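&lt;p>A short sketch, using NumPy and an illustrative symmetric positive definite matrix:&lt;/p>
&lt;pre>&lt;code class="language-python">import numpy as np

# A small symmetric positive definite matrix (made-up example).
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

L = np.linalg.cholesky(A)       # lower triangular factor
print(L)
print(np.allclose(L @ L.T, A))  # True: A equals L times L transpose
&lt;/code>&lt;/pre>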
&lt;hr>
&lt;h2 id="definition">
 Definition
 
 &lt;a class="anchor" href="#definition">#&lt;/a>
 
&lt;/h2>
&lt;p>For a symmetric positive definite matrix:&lt;/p></description></item><item><title>Eigen Decomposition</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/040-eigen-decomposition/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/040-eigen-decomposition/</guid><description>&lt;h1 id="eigen-decomposition">
 Eigen Decomposition
 
 &lt;a class="anchor" href="#eigen-decomposition">#&lt;/a>
 
&lt;/h1>
&lt;p>Eigen decomposition expresses a matrix using its eigenvectors and eigenvalues.&lt;/p>
&lt;p>From lecture discussions, this is one of the most important ways to understand the internal structure of a matrix.&lt;/p>
&lt;p>Instead of treating the matrix as a black box, eigen decomposition reveals its fundamental directions and scaling behaviour.&lt;/p>
&lt;blockquote class="book-hint info">
&lt;p>Key Idea:
Eigen decomposition rewrites a matrix in terms of directions (eigenvectors) and scaling factors (eigenvalues).
This makes complex transformations easier to understand and compute.&lt;/p></description></item><item><title>Diagonalization</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/diagonalization/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/diagonalization/</guid><description>&lt;h1 id="diagonalization">
 Diagonalization
 
 &lt;a class="anchor" href="#diagonalization">#&lt;/a>
 
&lt;/h1>
&lt;p>Diagonalisation expresses a matrix using its eigenvectors and eigenvalues when possible.&lt;/p>
&lt;p>From lecture explanation, diagonalisation is one of the most powerful tools because it converts a complicated matrix into a much simpler form.&lt;/p>
&lt;p>Instead of working with a full matrix, we work with a diagonal matrix, which is much easier to analyse and compute.&lt;/p>
&lt;blockquote class="book-hint info">
&lt;p>Key Idea:
If a matrix has enough independent eigenvectors, it can be rewritten as a diagonal matrix using a change of basis.
This simplifies matrix operations significantly.&lt;/p></description></item><item><title>Singular Value Decomposition (SVD)</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/050-singular-value-decomposition/</link><pubDate>Wed, 18 Mar 2026 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/050-singular-value-decomposition/</guid><description>&lt;h1 id="singular-value-decomposition-svd">
 Singular Value Decomposition (SVD)
 
 &lt;a class="anchor" href="#singular-value-decomposition-svd">#&lt;/a>
 
&lt;/h1>
&lt;p>Singular Value Decomposition (SVD) is one of the most important matrix decomposition techniques in linear algebra and machine learning.&lt;/p>
&lt;p>It factorises any matrix into three simpler matrices that reveal its structure.&lt;/p>
&lt;blockquote class="book-hint info">
&lt;p>Key Idea:
SVD decomposes a matrix into rotations + scaling.
It tells us how data is transformed along orthogonal directions.&lt;/p>
&lt;/blockquote>
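&lt;p>A minimal sketch of the factorisation in NumPy (the matrix is an arbitrary example):&lt;/p>
&lt;pre>&lt;code class="language-python">import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])    # any real matrix works, square or not

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Rebuild A from the three factors: rotations (U, Vt) and scaling (s).
print(np.allclose(U @ np.diag(s) @ Vt, A))   # True
print(s)                                     # singular values, sorted descending
&lt;/code>&lt;/pre>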
&lt;hr>
&lt;h1 id="definition">
 Definition
 
 &lt;a class="anchor" href="#definition">#&lt;/a>
 
&lt;/h1>
&lt;p>For any matrix in real space:

&lt;span style="color: green;">
 &lt;span>
 \[ 
A \in \mathbb{R}^{m \times n}
 \]
 &lt;/span>

&lt;/span>&lt;/p></description></item><item><title>Matrix Approximation</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/060-matrix-approximation/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/060-matrix-approximation/</guid><description>&lt;h1 id="matrix-approximation">
 Matrix Approximation
 
 &lt;a class="anchor" href="#matrix-approximation">#&lt;/a>
 
&lt;/h1>
&lt;p>Low-rank approximation keeps the most important structure while reducing noise and computation.&lt;/p>
&lt;hr>
&lt;h2 id="low-rank-approximation">
 Low-Rank Approximation
 
 &lt;a class="anchor" href="#low-rank-approximation">#&lt;/a>
 
&lt;/h2>
&lt;p>Used for:&lt;/p>
&lt;ul>
&lt;li>Dimensionality reduction&lt;/li>
&lt;li>Noise removal&lt;/li>
&lt;li>Efficient computation&lt;/li>
&lt;/ul>
&lt;p>Forms the basis of &lt;strong>PCA&lt;/strong>.&lt;/p>
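&lt;p>A brief sketch (illustrative only) of a rank-k truncation via the SVD:&lt;/p>
&lt;pre>&lt;code class="language-python">import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2  # keep only the two largest singular values
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# A_k is the best rank-k approximation of A in the least-squares sense.
print(np.linalg.matrix_rank(A_k))   # 2
print(np.linalg.norm(A - A_k))      # approximation error
&lt;/code>&lt;/pre>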
&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/03-matrix-decomposition/">
 Matrix Decompositions
&lt;/a>&lt;/p></description></item><item><title>Vector Calculus</title><link>https://arshadhs.github.io/docs/ai/maths/020-calculus/04-vector-calculus/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/020-calculus/04-vector-calculus/</guid><description>&lt;h1 id="vector-calculus">
 Vector Calculus
 
 &lt;a class="anchor" href="#vector-calculus">#&lt;/a>
 
&lt;/h1>
&lt;p>Vector calculus extends differentiation to multivariate and vector-valued functions.&lt;/p>
&lt;p>Gradients power learning. This section builds differentiation skills needed for backpropagation.&lt;/p>
&lt;hr>




&lt;ul>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/04-vector-calculus/010-univariate-differentiation/">Differentiation of Univariate Functions&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/04-vector-calculus/020-partial-derivatives-and-gradients/">Partial Differentiation and Gradients&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/04-vector-calculus/030-vector-and-matrix-gradients/">Gradients of Vector-Valued and Matrix Functions&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/04-vector-calculus/050-gradient-identities/">Useful Gradient Identities&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/04-vector-calculus/060-backpropagation/">Backpropagation and Automatic Differentiation&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/04-vector-calculus/070-higher-order-derivatives/">Higher-order derivatives&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/04-vector-calculus/080-taylors-series/">Taylor’s series&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/04-vector-calculus/090-maxima-and-minima/">Maxima and Minima&lt;/a>
 &lt;/li>
 
 

 
 
&lt;/ul>


&lt;hr>


&lt;script src="https://arshadhs.github.io/mermaid.min.js">&lt;/script>

 &lt;script>mermaid.initialize({
 "flowchart": {
 "useMaxWidth":true
 },
 "theme": "default"
}
)&lt;/script>




&lt;pre class="mermaid">
flowchart TD

 %% Core Node
 PD[&amp;#34;Partial Derivatives&amp;#34;]

 %% Supporting Concepts
 DQ[&amp;#34;Difference Quotient&amp;#34;]
 JH[&amp;#34;Jacobian / Hessian&amp;#34;]
 TS[&amp;#34;Taylor Series&amp;#34;]

 %% Application Chapters
 CH6[&amp;#34;&amp;lt;br/&amp;gt;Probability&amp;#34;]
 CH7[&amp;#34;&amp;lt;br/&amp;gt;Optimization&amp;#34;]
 CH9[&amp;#34;&amp;lt;br/&amp;gt;Regression&amp;#34;]
 CH10[&amp;#34;&amp;lt;br/&amp;gt;Dimensionality Reduction&amp;#34;]
 CH11[&amp;#34;&amp;lt;br/&amp;gt;Density Estimation&amp;#34;]
 CH12[&amp;#34;&amp;lt;br/&amp;gt;Classification&amp;#34;]

 %% Relationships
 DQ --&amp;gt;|defines| PD
 PD --&amp;gt;|collected in| JH
 JH --&amp;gt;|used in| TS
 JH --&amp;gt;|used in| CH6
	
 PD --&amp;gt;|used in| CH7
 PD --&amp;gt;|used in| CH9
 PD --&amp;gt;|used in| CH10
 PD --&amp;gt;|used in| CH11
 PD --&amp;gt;|used in| CH12

 %% Styling (Your Soft Academic Palette)
 style PD fill:#90CAF9,stroke:#1E88E5,color:#000

 style DQ fill:#CE93D8,stroke:#8E24AA,color:#000
 style JH fill:#CE93D8,stroke:#8E24AA,color:#000
 style TS fill:#CE93D8,stroke:#8E24AA,color:#000
 style CH6 fill:#CE93D8,stroke:#8E24AA,color:#000
	
 style CH7 fill:#C8E6C9,stroke:#2E7D32,color:#000
 style CH9 fill:#C8E6C9,stroke:#2E7D32,color:#000
 style CH10 fill:#C8E6C9,stroke:#2E7D32,color:#000
 style CH11 fill:#C8E6C9,stroke:#2E7D32,color:#000
 style CH12 fill:#C8E6C9,stroke:#2E7D32,color:#000

&lt;/pre>

&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/">
 Calculus
&lt;/a>&lt;/p></description></item><item><title>Continuous Optimisation</title><link>https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/</guid><description>&lt;h1 id="continuous-optimisation">
 Continuous Optimisation
 
 &lt;a class="anchor" href="#continuous-optimisation">#&lt;/a>
 
&lt;/h1>
&lt;p>Optimisation finds parameters that minimise (or maximise) an objective function.&lt;/p>
&lt;hr>




&lt;ul>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/gradient-descent/">Optimisation using Gradient Descent&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/constrained-optimisation/">Constrained Optimisation&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/lagrange-multipliers/">Lagrange Multipliers&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/convex-optimisation/">Convex Optimisation&lt;/a>
 &lt;/li>
 
 

 
 
&lt;/ul>


&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/">
 Calculus
&lt;/a>&lt;/p></description></item><item><title>Optimisation using Gradient Descent</title><link>https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/gradient-descent/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/gradient-descent/</guid><description>&lt;h1 id="optimisation-using-gradient-descent">
 Optimisation using Gradient Descent
 
 &lt;a class="anchor" href="#optimisation-using-gradient-descent">#&lt;/a>
 
&lt;/h1>
&lt;p>Gradient descent is an optimisation algorithm used to train machine-learning models and neural networks.&lt;/p>
&lt;ul>
&lt;li>It updates parameters by moving in the direction opposite to the gradient of the loss.&lt;/li>
&lt;/ul>
&lt;p>It trains models by minimising the error between predicted and actual results:&lt;/p>
&lt;ul>
&lt;li>it iteratively adjusts the model parameters&lt;/li>
&lt;li>each step moves in the direction of steepest decrease of the loss function&lt;/li>
&lt;li>over many iterations the model learns weights that give better predictions (a minimal update sketch follows the list of variants below)&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h2 id="types-of-gradient-gescent-learning-algorithms">
 Types of Gradient Gescent learning algorithms
 
 &lt;a class="anchor" href="#types-of-gradient-gescent-learning-algorithms">#&lt;/a>
 
&lt;/h2>
&lt;ol>
&lt;li>Batch gradient descent&lt;/li>
&lt;li>Stochastic gradient descent&lt;/li>
&lt;li>Mini-batch gradient descent&lt;/li>
&lt;/ol>
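&lt;p>&lt;em>A minimal, illustrative sketch (not part of the original notes): batch gradient descent on a synthetic least-squares problem, assuming NumPy. The data, learning rate and iteration count are made up; the other variants differ only in how many samples feed each gradient estimate.&lt;/em>&lt;/p>
&lt;pre>&lt;code class="language-python"># Batch gradient descent on a synthetic least-squares problem (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # 100 samples, 3 features (synthetic)
y = X @ np.array([1.0, -2.0, 0.5])     # synthetic targets
w = np.zeros(3)                        # parameters to learn
lr = 0.1                               # learning rate (step size)

for epoch in range(50):
    # gradient of the mean squared error over the full dataset
    grad = 2.0 / len(X) * X.T @ (X @ w - y)
    w = w - lr * grad                  # move opposite the gradient

# Stochastic / mini-batch gradient descent use the same update,
# but compute grad from one sample or a small random subset of rows.
&lt;/code>&lt;/pre>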
&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/">
 Continuous Optimisation
&lt;/a>&lt;/p></description></item><item><title>Constrained Optimisation</title><link>https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/constrained-optimisation/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/constrained-optimisation/</guid><description>&lt;h1 id="constrained-optimisation">
 Constrained Optimisation
 
 &lt;a class="anchor" href="#constrained-optimisation">#&lt;/a>
 
&lt;/h1>
&lt;p>Optimisation with constraints (equalities/inequalities).&lt;/p>
&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/">
 Continuous Optimisation
&lt;/a>&lt;/p></description></item><item><title>Lagrange Multipliers</title><link>https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/lagrange-multipliers/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/lagrange-multipliers/</guid><description>&lt;h1 id="lagrange-multipliers">
 Lagrange Multipliers
 
 &lt;a class="anchor" href="#lagrange-multipliers">#&lt;/a>
 
&lt;/h1>
&lt;p>Transforms constrained problems into unconstrained ones using Lagrangians.&lt;/p>
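&lt;p>As a brief illustration (standard form, not from the original notes): to minimise f(x) subject to an equality constraint g(x) = 0, form the Lagrangian&lt;/p>
&lt;span style="color: red;">
 \[
\mathcal{L}(\mathbf{x}, \lambda) = f(\mathbf{x}) + \lambda \, g(\mathbf{x}),
\qquad
\nabla_{\mathbf{x}} \mathcal{L}(\mathbf{x}, \lambda) = \mathbf{0},
\quad
g(\mathbf{x}) = 0
\]
&lt;/span>
&lt;p>Solving the stationarity conditions of the (unconstrained) Lagrangian recovers the constrained optimum.&lt;/p>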
&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/">
 Continuous Optimisation
&lt;/a>&lt;/p></description></item><item><title>Convex Optimisation</title><link>https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/convex-optimisation/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/convex-optimisation/</guid><description>&lt;h1 id="convex-optimisation">
 Convex Optimisation
 
 &lt;a class="anchor" href="#convex-optimisation">#&lt;/a>
 
&lt;/h1>
&lt;p>Convex objectives have a single global minimum, making optimisation reliable.&lt;/p>
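&lt;p>For reference (standard definition, not from the original notes): f is convex if, for all x, y and all λ in [0, 1],&lt;/p>
&lt;span style="color: red;">
 \[
f(\lambda \mathbf{x} + (1-\lambda)\mathbf{y}) \le \lambda f(\mathbf{x}) + (1-\lambda) f(\mathbf{y})
\]
&lt;/span>
&lt;p>so any local minimum is automatically a global minimum.&lt;/p>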
&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/05-optimisation/">
 Continuous Optimisation
&lt;/a>&lt;/p></description></item><item><title>Nonlinear Optimisation</title><link>https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/</guid><description>&lt;h1 id="nonlinear-optimisation-in-machine-learning">
 Nonlinear Optimisation in Machine Learning
 
 &lt;a class="anchor" href="#nonlinear-optimisation-in-machine-learning">#&lt;/a>
 
&lt;/h1>
&lt;p>Practical training challenges and modern optimisers used in ML.&lt;/p>
&lt;hr>




&lt;ul>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/optimisation-challenges/">Challenges in Gradient-Based Optimisation&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/stochastic-gradient-descent/">Stochastic Gradient Descent (SGD)&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/momentum-methods/">Momentum-Based Learning&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/adaptive-methods/">Adaptive Methods: AdaGrad, RMSProp, Adam&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/hyperparameter-tuning/">Tuning Hyperparameters and Preprocessing&lt;/a>
 &lt;/li>
 
 

 
 
&lt;/ul>


&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/">
 Calculus
&lt;/a>&lt;/p></description></item><item><title>Challenges in Gradient-Based Optimisation</title><link>https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/optimisation-challenges/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/optimisation-challenges/</guid><description>&lt;h1 id="challenges-in-gradient-based-optimisation">
 Challenges in Gradient-Based Optimisation
 
 &lt;a class="anchor" href="#challenges-in-gradient-based-optimisation">#&lt;/a>
 
&lt;/h1>
&lt;ul>
&lt;li>Local optima and flat regions&lt;/li>
&lt;li>Differential curvature&lt;/li>
&lt;li>Difficult topologies (cliffs and valleys)&lt;/li>
&lt;/ul>
&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/">
 Nonlinear Optimisation
&lt;/a>&lt;/p></description></item><item><title>Stochastic Gradient Descent (SGD)</title><link>https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/stochastic-gradient-descent/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/stochastic-gradient-descent/</guid><description>&lt;h1 id="stochastic-gradient-descent-sgd">
 Stochastic Gradient Descent (SGD)
 
 &lt;a class="anchor" href="#stochastic-gradient-descent-sgd">#&lt;/a>
 
&lt;/h1>
&lt;p>SGD uses mini-batches to trade exact gradients for speed and generalisation.&lt;/p>
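&lt;p>&lt;em>A minimal sketch of a single mini-batch step, assuming NumPy; the function name, batch size and learning rate are illustrative, not from the original notes.&lt;/em>&lt;/p>
&lt;pre>&lt;code class="language-python">import numpy as np

def sgd_step(X, y, w, lr, batch_size, rng):
    """One stochastic gradient step on a mean-squared-error objective."""
    idx = rng.choice(len(X), size=batch_size, replace=False)  # sample a mini-batch
    Xb, yb = X[idx], y[idx]
    grad = 2.0 / batch_size * Xb.T @ (Xb @ w - yb)            # noisy gradient estimate
    return w - lr * grad                                      # same update rule as full-batch
&lt;/code>&lt;/pre>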
&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/">
 Nonlinear Optimisation
&lt;/a>&lt;/p></description></item><item><title>Momentum-Based Learning</title><link>https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/momentum-methods/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/momentum-methods/</guid><description>&lt;h1 id="momentum-based-learning">
 Momentum-Based Learning
 
 &lt;a class="anchor" href="#momentum-based-learning">#&lt;/a>
 
&lt;/h1>
&lt;p>Momentum smooths updates and helps traverse valleys efficiently.&lt;/p>
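&lt;p>&lt;em>A minimal sketch of the heavy-ball momentum update; names and default values are illustrative assumptions.&lt;/em>&lt;/p>
&lt;pre>&lt;code class="language-python">def momentum_step(w, v, grad, lr=0.01, beta=0.9):
    """Accumulate a velocity so consistent gradient directions reinforce each other."""
    v = beta * v - lr * grad   # smooth the raw gradient with past updates
    return w + v, v            # updated parameters and velocity
&lt;/code>&lt;/pre>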
&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/">
 Nonlinear Optimisation
&lt;/a>&lt;/p></description></item><item><title>Adaptive Methods: AdaGrad, RMSProp, Adam</title><link>https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/adaptive-methods/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/adaptive-methods/</guid><description>&lt;h1 id="adaptive-methods-adagrad-rmsprop-adam">
 Adaptive Methods: AdaGrad, RMSProp, Adam
 
 &lt;a class="anchor" href="#adaptive-methods-adagrad-rmsprop-adam">#&lt;/a>
 
&lt;/h1>
&lt;p>Adaptive methods adjust learning rates per-parameter.&lt;/p>
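&lt;p>&lt;em>A minimal sketch of an Adam-style update (standard form); hyperparameter values are the usual defaults and the variable names are illustrative.&lt;/em>&lt;/p>
&lt;pre>&lt;code class="language-python">import numpy as np

def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """Per-parameter step sizes from running first and second moments of the gradient."""
    m = beta1 * m + (1 - beta1) * grad        # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad**2     # second moment (uncentred variance)
    m_hat = m / (1 - beta1**t)                # bias correction for early steps
    v_hat = v / (1 - beta2**t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
&lt;/code>&lt;/pre>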
&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/">
 Nonlinear Optimisation
&lt;/a>&lt;/p></description></item><item><title>Tuning Hyperparameters and Preprocessing</title><link>https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/hyperparameter-tuning/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/hyperparameter-tuning/</guid><description>&lt;h1 id="tuning-hyperparameters-and-preprocessing">
 Tuning Hyperparameters and Preprocessing
 
 &lt;a class="anchor" href="#tuning-hyperparameters-and-preprocessing">#&lt;/a>
 
&lt;/h1>
&lt;ul>
&lt;li>Learning rate schedules&lt;/li>
&lt;li>Initialisation&lt;/li>
&lt;li>Tuning hyperparameters&lt;/li>
&lt;li>Importance of feature preprocessing&lt;/li>
&lt;/ul>
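&lt;p>&lt;em>Following up on the learning-rate schedules listed above, a minimal step-decay sketch; all numbers are illustrative assumptions.&lt;/em>&lt;/p>
&lt;pre>&lt;code class="language-python">def step_decay(initial_lr, epoch, drop=0.5, epochs_per_drop=10):
    """Halve the learning rate every epochs_per_drop epochs."""
    return initial_lr * drop ** (epoch // epochs_per_drop)

# step_decay(0.1, 0) == 0.1, step_decay(0.1, 10) == 0.05, step_decay(0.1, 25) == 0.025
&lt;/code>&lt;/pre>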
&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/020-calculus/06-nonlinear-optimisation/">
 Nonlinear Optimisation
&lt;/a>&lt;/p></description></item><item><title>Dimensionality reduction and PCA</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/</guid><description>&lt;h1 id="dimensionality-reduction-and-pca">
 Dimensionality reduction and PCA
 
 &lt;a class="anchor" href="#dimensionality-reduction-and-pca">#&lt;/a>
 
&lt;/h1>
&lt;p>PCA and SVM connect linear algebra, geometry, and optimisation.&lt;/p>
&lt;hr>




&lt;ul>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/pca/">Principal Component Analysis (PCA)&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/pca-theory/">PCA Theory&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/pca-practice/">PCA in Practice&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/latent-variable-view/">Latent Variable Perspective&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/svm-mathematical-foundations/">Mathematical Preliminaries of SVM&lt;/a>
 &lt;/li>
 
 
 
 
 &lt;li>
 &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/kernels/">Nonlinear SVM and Kernels&lt;/a>
 &lt;/li>
 
 

 
 
&lt;/ul>


&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/">
 Linear Algebra
&lt;/a>&lt;/p></description></item><item><title>Principal Component Analysis (PCA)</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/pca/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/pca/</guid><description>&lt;h1 id="principal-component-analysis-pca">
 Principal Component Analysis (PCA)
 
 &lt;a class="anchor" href="#principal-component-analysis-pca">#&lt;/a>
 
&lt;/h1>
&lt;ul>
&lt;li>dimensionality reduction technique&lt;/li>
&lt;li>helps us to &lt;strong>reduce the number of features&lt;/strong> in a dataset while keeping the most important information.&lt;/li>
&lt;li>simplifies complex datasets by transforming correlated features into a smaller set of uncorrelated components.&lt;/li>
&lt;li>uses &lt;strong>linear algebra&lt;/strong> to transform data into &lt;strong>new features&lt;/strong> called principal components.&lt;/li>
&lt;li>finds these by calculating &lt;strong>eigenvectors (directions)&lt;/strong> and &lt;strong>eigenvalues (importance)&lt;/strong> from the &lt;strong>covariance matrix&lt;/strong>.&lt;/li>
&lt;li>PCA &lt;strong>selects the top components with the highest eigenvalues&lt;/strong> and &lt;strong>projects the data onto them to simplify the dataset&lt;/strong> (a minimal sketch follows this list).&lt;/li>
&lt;/ul>
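&lt;p>&lt;em>A minimal sketch of the eigen-decomposition route described above, assuming NumPy; the function and variable names are illustrative (in practice a library implementation such as scikit-learn&amp;rsquo;s PCA would normally be used).&lt;/em>&lt;/p>
&lt;pre>&lt;code class="language-python">import numpy as np

def pca(X, n_components):
    """Project X onto its top principal components."""
    Xc = X - X.mean(axis=0)                        # centre the data
    cov = np.cov(Xc, rowvar=False)                 # covariance matrix of the features
    eigvals, eigvecs = np.linalg.eigh(cov)         # eigenpairs, eigenvalues ascending
    order = np.argsort(eigvals)[::-1]              # largest eigenvalue (most variance) first
    components = eigvecs[:, order[:n_components]]  # directions of greatest variance
    return Xc @ components                         # data projected onto those directions
&lt;/code>&lt;/pre>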
&lt;blockquote class="book-hint default">
&lt;p>PCA prioritizes the directions where the data varies the most because more variation = more useful information.&lt;/p></description></item><item><title>PCA Theory</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/pca-theory/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/pca-theory/</guid><description>&lt;h1 id="pca-theory">
 PCA Theory
 
 &lt;a class="anchor" href="#pca-theory">#&lt;/a>
 
&lt;/h1>
&lt;ul>
&lt;li>Problem setting&lt;/li>
&lt;li>Maximum variance perspective&lt;/li>
&lt;li>Projection perspective&lt;/li>
&lt;li>Eigenvector and low-rank approximations&lt;/li>
&lt;/ul>
&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/">
 Dimensionality reduction and PCA
&lt;/a>&lt;/p></description></item><item><title>PCA in Practice</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/pca-practice/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/pca-practice/</guid><description>&lt;h1 id="pca-in-practice">
 PCA in Practice
 
 &lt;a class="anchor" href="#pca-in-practice">#&lt;/a>
 
&lt;/h1>
&lt;p>Key steps of PCA in practice, including considerations in high dimensions.&lt;/p>
&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/">
 Dimensionality reduction and PCA
&lt;/a>&lt;/p></description></item><item><title>Latent Variable Perspective</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/latent-variable-view/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/latent-variable-view/</guid><description>&lt;h1 id="latent-variable-perspective">
 Latent Variable Perspective
 
 &lt;a class="anchor" href="#latent-variable-perspective">#&lt;/a>
 
&lt;/h1>
&lt;p>PCA can be interpreted as modelling data using a smaller number of latent variables.&lt;/p>
&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/">
 Dimensionality reduction and PCA
&lt;/a>&lt;/p></description></item><item><title>Mathematical Preliminaries of SVM</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/svm-mathematical-foundations/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/svm-mathematical-foundations/</guid><description>&lt;h1 id="mathematical-preliminaries-of-svm">
 Mathematical Preliminaries of SVM
 
 &lt;a class="anchor" href="#mathematical-preliminaries-of-svm">#&lt;/a>
 
&lt;/h1>
&lt;ul>
&lt;li>Primal and dual perspectives&lt;/li>
&lt;li>Geometry of margins&lt;/li>
&lt;/ul>
&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/">
 Dimensionality reduction and PCA
&lt;/a>&lt;/p></description></item><item><title>Nonlinear SVM and Kernels</title><link>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/kernels/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/kernels/</guid><description>&lt;h1 id="nonlinear-svm-and-kernels">
 Nonlinear SVM and Kernels
 
 &lt;a class="anchor" href="#nonlinear-svm-and-kernels">#&lt;/a>
 
&lt;/h1>
&lt;p>Kernels allow inner products in high-dimensional feature spaces without explicit mapping.&lt;/p>
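&lt;p>&lt;em>A small illustration of the idea (not from the original notes): the RBF kernel evaluates an inner product in an implicit high-dimensional feature space directly from the original vectors; gamma is an assumed hyperparameter.&lt;/em>&lt;/p>
&lt;pre>&lt;code class="language-python">import numpy as np

def rbf_kernel(x, z, gamma=0.5):
    """k(x, z) = exp(-gamma * ||x - z||^2), with no explicit feature map."""
    diff = np.asarray(x, dtype=float) - np.asarray(z, dtype=float)
    return np.exp(-gamma * np.dot(diff, diff))

# rbf_kernel(x, x) == 1.0 for any x; values decay towards 0 as x and z move apart.
&lt;/code>&lt;/pre>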
&lt;hr>
&lt;p>&lt;a href="https://arshadhs.github.io/">Home&lt;/a> | &lt;a href="https://arshadhs.github.io/docs/ai/maths/010-linear-algebra/07-dimensionality-reduction/">
 Dimensionality reduction and PCA
&lt;/a>&lt;/p></description></item><item><title>AI Learning Resources</title><link>https://arshadhs.github.io/docs/ai/foundation/ai-notes/</link><pubDate>Sat, 03 Jan 2026 12:00:00 +0100</pubDate><guid>https://arshadhs.github.io/docs/ai/foundation/ai-notes/</guid><description>&lt;h1 id="ai-learning-resources">
 AI Learning Resources
 
 &lt;a class="anchor" href="#ai-learning-resources">#&lt;/a>
 
&lt;/h1>
&lt;p>A curated list of &lt;strong>high-quality online courses&lt;/strong> to learn Artificial Intelligence, Machine Learning, and Deep Learning from reputable universities and organisations.&lt;/p>
&lt;hr>
&lt;h2 id="recommended-books--references">
 Recommended Books &amp;amp; References
 
 &lt;a class="anchor" href="#recommended-books--references">#&lt;/a>
 
&lt;/h2>
&lt;hr>
&lt;h3 id="deep-neural-networks-dnn">
 Deep Neural Networks (DNN)
 
 &lt;a class="anchor" href="#deep-neural-networks-dnn">#&lt;/a>
 
&lt;/h3>
&lt;ul>
&lt;li>
&lt;p>&lt;strong>Deep Learning&lt;/strong>. MIT Press.&lt;br>
Goodfellow, I., Bengio, Y., &amp;amp; Courville, A. (2016).&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Introduction to Deep Learning&lt;/strong>. MIT Press.&lt;br>
Charniak, E. (2019).&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Deep Learning with Python&lt;/strong>. Simon &amp;amp; Schuster.&lt;br>
Chollet, F. (2021).&lt;/p></description></item><item><title>ML Pipeline</title><link>https://arshadhs.github.io/docs/ai/machine-learning/99-ml-pipeline-model/</link><pubDate>Tue, 21 Apr 2026 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/machine-learning/99-ml-pipeline-model/</guid><description>&lt;h1 id="machine-learning-pipeline-preprocessing--models">
 Machine Learning Pipeline: Preprocessing &amp;amp; Models
 
 &lt;a class="anchor" href="#machine-learning-pipeline-preprocessing--models">#&lt;/a>
 
&lt;/h1>
&lt;p>This page explains both &lt;strong>data preprocessing&lt;/strong> and &lt;strong>model development concepts&lt;/strong> in a clear, structured way.&lt;/p>
&lt;blockquote class="book-hint info">
&lt;p>A complete ML pipeline includes preprocessing, feature engineering, feature selection, and model training.&lt;/p>
&lt;/blockquote>
&lt;hr>
&lt;h1 id="1-data-preprocessing-overview">
 1. Data Preprocessing Overview
 
 &lt;a class="anchor" href="#1-data-preprocessing-overview">#&lt;/a>
 
&lt;/h1>
&lt;p>Raw data is often:&lt;/p>
&lt;ul>
&lt;li>Noisy&lt;/li>
&lt;li>Incomplete&lt;/li>
&lt;li>Inconsistent&lt;/li>
&lt;/ul>
&lt;p>Preprocessing ensures data is suitable for machine learning.&lt;/p>
&lt;hr>
&lt;h1 id="2-missing-values">
 2. Missing Values
 
 &lt;a class="anchor" href="#2-missing-values">#&lt;/a>
 
&lt;/h1>
&lt;p>&lt;strong>Why they occur&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>Sensor errors&lt;/li>
&lt;li>Data collection issues&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>Methods&lt;/strong>&lt;/p></description></item></channel></rss>