Understanding the Perceptron

Arpan Mishra
3 min read · Aug 2, 2024


A perceptron is a fundamental building block of neural networks. It’s a mathematical model inspired by the biological neuron, designed to perform binary classification tasks. In essence, it takes multiple inputs, combines them through a weighted sum, and passes the result through an activation function to produce a single binary output.

Components of a Perceptron:

  1. Inputs (x1, x2, …, xn): These are the features or attributes of the data.
  2. Weights (w1, w2, …, wn): Each input is associated with a weight that determines its importance.
  3. Bias (b): A constant value added to the weighted sum.
  4. Net input (z): The weighted sum of the inputs plus the bias: z = w1*x1 + w2*x2 + … + wn*xn + b.
  5. Activation function: A function that determines the output based on the net input. In the simplest perceptron, it’s a step function:
  • Output = 1 if z >= 0
  • Output = 0 if z < 0
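
To make these components concrete, here is a minimal Python sketch of a perceptron’s forward pass (the names step, predict, weights, and bias are purely illustrative, and NumPy is assumed only for convenience):

```python
import numpy as np

def step(z):
    """Step activation: output 1 if the net input is non-negative, else 0."""
    return 1 if z >= 0 else 0

def predict(x, weights, bias):
    """Perceptron forward pass: weighted sum of inputs plus bias, then step."""
    z = np.dot(weights, x) + bias   # net input z = w1*x1 + ... + wn*xn + b
    return step(z)

# Example with two inputs and hand-picked weights
x = np.array([1.0, 0.5])
weights = np.array([0.4, -0.2])
bias = -0.1
print(predict(x, weights, bias))  # 1, since z = 0.4 - 0.1 - 0.1 = 0.2 >= 0
```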

The Perceptron Learning Algorithm

The goal of the perceptron learning algorithm is to adjust the weights and bias to correctly classify the input data. It’s an iterative process that continues until the perceptron makes correct predictions for all training examples.

Steps:

1. Initialization:

  • Randomly initialize the weights and bias.
  • Set a learning rate (alpha), which determines the step size for weight updates.

2. Iteration: For each training example (x, y):

  • Calculate the net input: z = w1*x1 + w2*x2 + … + wn*xn + b.
  • Apply the activation function to get the predicted output (y_hat).
  • Update the weights and bias using the following rules:
  • w_i = w_i + alpha * (y - y_hat) * x_i
  • b = b + alpha * (y - y_hat)

3. Convergence: Repeat step 2 until the perceptron makes correct predictions for all training examples or a maximum number of iterations is reached.
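
Putting the three steps together, a minimal training loop could look like the Python sketch below; it assumes X is a NumPy array of shape (n_samples, n_features), y contains 0/1 labels, and the names train_perceptron, alpha, and max_epochs are chosen purely for illustration:

```python
import numpy as np

def train_perceptron(X, y, alpha=0.1, max_epochs=100):
    """Perceptron learning algorithm: adjust weights and bias until every
    training example is classified correctly or max_epochs is reached."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])   # step 1: random initialization
    b = 0.0

    for _ in range(max_epochs):                   # step 3: repeat until convergence
        errors = 0
        for x_i, y_i in zip(X, y):                # step 2: iterate over examples
            z = np.dot(w, x_i) + b                # net input
            y_hat = 1 if z >= 0 else 0            # step activation
            update = alpha * (y_i - y_hat)        # zero when the prediction is correct
            w += update * x_i                     # w_i = w_i + alpha * (y - y_hat) * x_i
            b += update                           # b = b + alpha * (y - y_hat)
            errors += int(update != 0)
        if errors == 0:                           # all examples classified correctly
            break
    return w, b

# Example: learning the logical AND of two binary inputs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
```

Since AND is linearly separable, the loop stops as soon as an epoch passes with no weight updates.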

Mathematical Representation

For a given input vector x = (x1, x2, …, xn) and weight vector w = (w1, w2, …, wn), the perceptron computes the weighted sum:

z = w · x + b = w1*x1 + w2*x2 + … + wn*xn + b

The output of the perceptron, y_hat, is determined by the activation function:

y_hat = 1 if z >= 0, and y_hat = 0 if z < 0

Additional Notes:

  • The Perceptron Convergence Theorem states that if there exists a weight vector w* and bias b* such that all training examples are correctly classified, the perceptron learning algorithm will converge to a solution that correctly classifies all training examples in a finite number of steps. In other words, if the training data is linearly separable, the perceptron algorithm is guaranteed to find a separating hyperplane.
  • The bias term plays a crucial role both in the Perceptron Convergence Theorem and in the overall performance of the perceptron model. Without a bias term, the decision boundary created by the perceptron (a hyperplane) is constrained to pass through the origin, because the decision function reduces to y_hat = step(w · x). When a bias term b is included, the decision function becomes y_hat = step(w · x + b). This allows the decision boundary to shift away from the origin, giving the perceptron the flexibility to find a separating hyperplane even when the optimal boundary does not pass through the origin.
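
To see why the bias matters, consider a hypothetical 1-D dataset whose true threshold is x = 2.5, well away from the origin. The sketch below (with illustrative weight values) shows that no single weight can separate the points without a bias, while a weight plus a bias can:

```python
import numpy as np

# 1-D points labelled 1 only when x >= 2.5; the separating threshold is not at the origin.
X = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([0, 0, 1, 1])

def step(z):
    return (z >= 0).astype(int)

# Without a bias the boundary w*x = 0 passes through the origin: any w >= 0
# labels all four (positive) points as 1, and any w < 0 labels them all as 0,
# so some points are always misclassified.
print(step(0.5 * X))        # [1 1 1 1] -> misclassifies x = 1 and x = 2
print(step(-0.5 * X))       # [0 0 0 0] -> misclassifies x = 3 and x = 4

# With a bias the boundary shifts to x = -b/w = 2.5 and separates the classes.
w, b = 1.0, -2.5
print(step(w * X + b))      # [0 0 1 1] -> matches y
```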
