<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>MLP on Arshad Siddiqui</title><link>https://arshadhs.github.io/tags/mlp/</link><description>Recent content in MLP on Arshad Siddiqui</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Thu, 26 Feb 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://arshadhs.github.io/tags/mlp/index.xml" rel="self" type="application/rss+xml"/><item><title>Deep Feedforward Neural Networks (DFNN) for Classification</title><link>https://arshadhs.github.io/docs/ai/deep-learning/050-deep-feedforward/</link><pubDate>Thu, 26 Feb 2026 00:00:00 +0000</pubDate><guid>https://arshadhs.github.io/docs/ai/deep-learning/050-deep-feedforward/</guid><description>&lt;h1 id="deep-feedforward-neural-networks-dfnn-or-multi-layer-perceptrons-mlp-for-classification">
 Deep Feedforward Neural Networks (DFNN) or Multi-Layer Perceptrons (MLP) for Classification
 
 &lt;a class="anchor" href="#deep-feedforward-neural-networks-dfnn-or-multi-layer-perceptrons-mlp-for-classification">#&lt;/a>
 
&lt;/h1>
&lt;p>A &lt;strong>Deep Feedforward Neural Network (DFNN)&lt;/strong>, also called a &lt;strong>Multi-Layer Perceptron (MLP)&lt;/strong>, is a neural network with one or more &lt;strong>hidden layers&lt;/strong> in which information flows &lt;strong>forward only&lt;/strong> (there are no recurrent connections).&lt;br>
For classification, DFNNs learn &lt;strong>non-linear decision boundaries&lt;/strong> by stacking hidden layers with &lt;strong>non-linear activation functions&lt;/strong>.&lt;/p>
&lt;blockquote class="book-hint info">
&lt;p>Core idea:&lt;/p>
&lt;ul>
&lt;li>A single neuron can only learn &lt;strong>linear&lt;/strong> boundaries.&lt;/li>
&lt;li>Adding &lt;strong>hidden layers + non-linearity&lt;/strong> allows DFNNs to solve problems like &lt;strong>XOR&lt;/strong>.&lt;/li>
&lt;/ul>
&lt;/blockquote>
&lt;hr>
&lt;h2 id="mlp-as-solution-for-xor">
 MLP as a Solution for XOR
 
 &lt;a class="anchor" href="#mlp-as-solution-for-xor">#&lt;/a>
 
&lt;/h2>
&lt;p>A single perceptron fails on XOR because XOR is &lt;strong>not linearly separable&lt;/strong>.&lt;/p></description></item></channel></rss>