<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Classification on Arshad Siddiqui</title><link>https://arshadhs.github.io/tags/classification/</link><description>Recent content in Classification on Arshad Siddiqui</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Thu, 26 Feb 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://arshadhs.github.io/tags/classification/index.xml" rel="self" type="application/rss+xml"/><item><title>LNN for Classification</title><link>https://arshadhs.github.io/docs/ai/deep-learning/040-linear-neural-networks-for-classification/</link><pubDate>Sun, 15 Feb 2026 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/deep-learning/040-linear-neural-networks-for-classification/</guid><description>&lt;h1 id="linear-nn-for-classification">
 Linear NN for Classification
 
 &lt;a class="anchor" href="#linear-nn-for-classification">#&lt;/a>
 
&lt;/h1>
&lt;p>A &lt;strong>Linear Neural Network (LNN) for classification&lt;/strong> uses &lt;strong>no hidden layers&lt;/strong>.&lt;br>
It learns a &lt;strong>linear decision boundary&lt;/strong>, outputs &lt;strong>class probabilities&lt;/strong>, and converts those probabilities into predicted class labels.&lt;/p>
&lt;blockquote class="book-hint info">
&lt;p>Neural-network view:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Binary classification&lt;/strong> → logistic regression (single neuron + sigmoid)&lt;/li>
&lt;li>&lt;strong>Multi-class classification&lt;/strong> → softmax regression (K output neurons + softmax)&lt;/li>
&lt;/ul>
&lt;/blockquote>
&lt;hr>
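&lt;p>The two output activations named above can be sketched in NumPy (a minimal illustrative sketch, not the model itself):&lt;/p>

```python
import numpy as np

def sigmoid(z):
    # Binary case: one logit in, one probability out
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    # Multi-class case: K logits in, K probabilities summing to 1
    e = np.exp(z - np.max(z))  # shift for numerical stability
    return e / e.sum()

p = sigmoid(0.0)                               # 0.5: maximally uncertain
probs = softmax(np.array([2.0, 1.0, 0.1]))     # three-class probabilities
pred = int(np.argmax(probs))                   # predicted class index: 0
```

Converting probabilities to a class is just a threshold (binary) or an argmax (multi-class).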


&lt;script src="https://arshadhs.github.io/mermaid.min.js">&lt;/script>

 &lt;script>mermaid.initialize({
 "flowchart": {
 "useMaxWidth":true
 },
 "theme": "default"
}
)&lt;/script>




&lt;pre class="mermaid">
flowchart LR
 D[&amp;#34;Data&amp;lt;br/&amp;gt;X, y&amp;#34;] --&amp;gt; M[&amp;#34;Linear model&amp;lt;br/&amp;gt;w, b&amp;#34;]
 M --&amp;gt; A[&amp;#34;Activation&amp;lt;br/&amp;gt;Sigmoid / Softmax&amp;#34;]
 A --&amp;gt; L[&amp;#34;Loss&amp;lt;br/&amp;gt;Cross-entropy&amp;#34;]
 L --&amp;gt; O[&amp;#34;Optimiser&amp;lt;br/&amp;gt;Mini-batch GD / Adam&amp;#34;]
 O --&amp;gt; P[&amp;#34;Updated parameters&amp;lt;br/&amp;gt;w, b&amp;#34;]
 P --&amp;gt; I[&amp;#34;Inference&amp;lt;br/&amp;gt;Probabilities → class&amp;#34;]

 %% Pastel colour scheme
 style D fill:#E3F2FD,stroke:#1E88E5,stroke-width:1px
 style M fill:#E8F5E9,stroke:#43A047,stroke-width:1px
 style A fill:#FFF3E0,stroke:#FB8C00,stroke-width:1px
 style L fill:#FCE4EC,stroke:#D81B60,stroke-width:1px
 style O fill:#F3E5F5,stroke:#8E24AA,stroke-width:1px
 style P fill:#E0F7FA,stroke:#00838F,stroke-width:1px
 style I fill:#F1F8E9,stroke:#558B2F,stroke-width:1px
&lt;/pre>

&lt;hr>
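&lt;p>The pipeline in the diagram above can be run end to end as softmax regression (a minimal NumPy sketch on assumed toy data, not the site&#39;s own code):&lt;/p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Data: 2 features, 3 well-separated clusters, one per class
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in (0.0, 2.0, 4.0)])
y = np.repeat(np.arange(3), 50)

# Linear model parameters w, b
W = np.zeros((2, 3))
b = np.zeros(3)

def softmax(Z):
    E = np.exp(Z - Z.max(axis=1, keepdims=True))
    return E / E.sum(axis=1, keepdims=True)

lr, epochs, batch = 0.5, 100, 32
n = X.shape[0]
for _ in range(epochs):
    idx = rng.permutation(n)
    for start in range(0, n, batch):
        B = idx[start:start + batch]
        P = softmax(X[B] @ W + b)           # activation: softmax
        P[np.arange(B.size), y[B]] -= 1.0   # grad of cross-entropy wrt logits
        W -= lr * X[B].T @ P / B.size       # optimiser: mini-batch GD
        b -= lr * P.mean(axis=0)

# Inference: probabilities to class
pred = np.argmax(softmax(X @ W + b), axis=1)
acc = (pred == y).mean()
```

Each boxed stage of the flowchart appears once: forward pass, loss gradient, parameter update, then inference by argmax.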
&lt;h2 id="classification">
 Classification
 
 &lt;a class="anchor" href="#classification">#&lt;/a>
 
&lt;/h2>
&lt;p>Classification predicts a &lt;strong>discrete class label&lt;/strong>.&lt;br>
Common settings:&lt;/p></description></item><item><title>Deep Feedforward Neural Networks (DFNN) for Classification</title><link>https://arshadhs.github.io/docs/ai/deep-learning/050-deep-feedforward/</link><pubDate>Thu, 26 Feb 2026 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/deep-learning/050-deep-feedforward/</guid><description>&lt;h1 id="deep-feedforward-neural-networks-dfnn-or-multi-layer-perceptrons-mlp-for-classification">
 Deep Feedforward Neural Networks (DFNN) or Multi-Layer Perceptrons (MLP) for Classification
 
 &lt;a class="anchor" href="#deep-feedforward-neural-networks-dfnn-or-multi-layer-perceptrons-mlp-for-classification">#&lt;/a>
 
&lt;/h1>
&lt;p>A &lt;strong>Deep Feedforward Neural Network (DFNN)&lt;/strong>, also called a &lt;strong>Multi-Layer Perceptron (MLP)&lt;/strong>, is a neural network with one or more &lt;strong>hidden layers&lt;/strong> where information flows &lt;strong>forward only&lt;/strong> (no recurrence).&lt;br>
For classification, DFNNs learn &lt;strong>non-linear decision boundaries&lt;/strong> by combining hidden layers with &lt;strong>non-linear activation functions&lt;/strong>.&lt;/p>
&lt;blockquote class="book-hint info">
&lt;p>Core idea:&lt;/p>
&lt;ul>
&lt;li>A single neuron can only learn &lt;strong>linear&lt;/strong> boundaries.&lt;/li>
&lt;li>Adding &lt;strong>hidden layers + non-linearity&lt;/strong> allows DFNNs to solve problems like &lt;strong>XOR&lt;/strong>.&lt;/li>
&lt;/ul>
&lt;/blockquote>
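&lt;p>The XOR point can be made concrete with a tiny fixed-weight MLP (the weights are a classic hand-derived solution, shown for illustration; training would find an equivalent one):&lt;/p>

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# One hidden layer of 2 ReLU units is enough for XOR
W1 = np.array([[1.0, 1.0], [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
w2 = np.array([1.0, -2.0])

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
h = relu(X @ W1 + b1)   # hidden layer bends the input space
out = h @ w2            # single linear output unit
# out is [0, 1, 1, 0]: XOR, which no single linear unit can produce
```

Dropping the hidden layer (or its non-linearity) collapses the network back to a linear map, which cannot fit these four points.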
&lt;hr>
&lt;h2 id="mlp-as-solution-for-xor">
 MLP as solution for XOR
 
 &lt;a class="anchor" href="#mlp-as-solution-for-xor">#&lt;/a>
 
&lt;/h2>
&lt;p>A single perceptron fails on XOR because XOR is &lt;strong>not linearly separable&lt;/strong>.&lt;/p></description></item><item><title>Support Vector Machine</title><link>https://arshadhs.github.io/docs/ai/machine-learning/07-support-vector-machines/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/machine-learning/07-support-vector-machines/</guid><description>&lt;h1 id="support-vector-machine-svm">
 Support Vector Machine (SVM)
 
 &lt;a class="anchor" href="#support-vector-machine-svm">#&lt;/a>
 
&lt;/h1>
&lt;p>A &lt;strong>Support Vector Machine (SVM)&lt;/strong> is a &lt;strong>supervised machine learning algorithm&lt;/strong> used for:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Classification&lt;/strong> (most common)&lt;/li>
&lt;li>&lt;strong>Regression&lt;/strong> (SVR – Support Vector Regression)&lt;/li>
&lt;/ul>
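&lt;p>The margin idea can be sketched with a hand-rolled hinge-loss trainer on toy data (an illustrative subgradient-descent sketch, not a production SVM solver):&lt;/p>

```python
import numpy as np

# Toy linearly separable data; labels are -1 / +1 as in the SVM formulation
X = np.array([[0, 0], [0, 1], [1, 0], [2, 2], [2, 3], [3, 2]], dtype=float)
y = np.array([-1, -1, -1, 1, 1, 1], dtype=float)

w = np.zeros(2)
b = 0.0
lr, lam = 0.1, 0.01   # step size and regularisation strength
n = X.shape[0]

for _ in range(500):
    margins = y * (X @ w + b)
    viol = 1.0 > margins                  # hinge loss is active for these points
    # Subgradient of: lam/2 * ||w||^2 + mean(max(0, 1 - margin))
    gw = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / n
    gb = -y[viol].sum() / n
    w -= lr * gw
    b -= lr * gb

pred = np.sign(X @ w + b)
acc = (pred == y).mean()
```

Minimising the regularised hinge loss pushes every point to a margin of at least 1, which is exactly the maximum-margin objective in its soft form; the points that end up on or inside the margin are the support vectors.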

&lt;blockquote class="book-hint">
 &lt;p>Find the decision boundary that separates classes with the &lt;strong>maximum margin&lt;/strong>.&lt;/p>
&lt;/blockquote>&lt;blockquote class="book-hint default">
&lt;p>A Support Vector Machine is a supervised learning algorithm that finds an optimal hyperplane by maximising the margin between classes, using support vectors and kernel functions to handle non-linear data.&lt;/p></description></item></channel></rss>