Deep Feedforward Neural Networks (DFNN) or Multi-Layer Perceptrons (MLP) for Classification #
A Deep Feedforward Neural Network (DFNN), also called a Multi-Layer Perceptron (MLP), is a neural network with one or more hidden layers where information flows forward only (no recurrence).
For classification, DFNNs learn non-linear decision boundaries by combining hidden layers with non-linear activation functions.
Core idea:
- A single neuron (perceptron) can only learn linear decision boundaries.
- Adding hidden layers with non-linear activations lets a DFNN represent non-linearly-separable functions such as XOR.
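The core idea can be made concrete with a tiny one-hidden-layer network that computes XOR. The weights below are hand-chosen for illustration (the classic construction from Goodfellow et al.'s textbook); a trained network would find an equivalent solution.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# Hand-chosen weights (illustrative, not learned).
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])   # hidden-layer weights
b1 = np.array([0.0, -1.0])    # hidden-layer biases
w2 = np.array([1.0, -2.0])    # output-layer weights

def mlp_xor(x):
    h = relu(W1 @ x + b1)     # non-linear hidden layer
    return w2 @ h             # linear output layer

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
preds = [int(mlp_xor(np.array(x, dtype=float))) for x in X]
print(preds)  # [0, 1, 1, 0] — matches the XOR truth table
```

The hidden layer maps the four inputs into a space where they become linearly separable; the output layer then draws a single linear boundary in that new space. Without the ReLU, the two layers would collapse into one linear map and XOR would remain unsolvable.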
MLP as a solution to XOR #
A single perceptron fails on XOR because XOR is not linearly separable: no single straight line in the input plane can separate the inputs where XOR outputs 1 from those where it outputs 0.
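This failure can be illustrated empirically (a brute-force sweep, not a proof): test a grid of weights and biases for a single linear threshold unit and record the best accuracy any of them achieves on the four XOR points.

```python
import itertools

# XOR truth table.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 0]

# Sweep weights and bias over a grid in [-4, 4] (step 0.5) for the
# linear threshold unit: predict 1 iff w1*x1 + w2*x2 + b > 0.
grid = [i / 2 for i in range(-8, 9)]
best = 0.0
for w1, w2, b in itertools.product(grid, repeat=3):
    preds = [int(w1 * x1 + w2 * x2 + b > 0) for x1, x2 in X]
    acc = sum(p == t for p, t in zip(preds, y)) / 4
    best = max(best, acc)
print(best)  # 0.75 — no setting classifies all four points correctly
```

Every linear boundary gets at most 3 of the 4 points right; this is exactly the limitation that a hidden layer with a non-linear activation removes.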