MP Neuron and Perceptron

Nitin Naidu
DataDrivenInvestor


With reference from session by Prof. Mitesh Khapra and Pratyush Kumar

MP Neuron model

The McCulloch-Pitts Neuron model dates back to 1943, when Warren McCulloch and Walter Pitts put forward the idea. It is loosely modeled on a human neuron in Biology, with parts such as the axon and soma mapped to the functioning of the artificial neuron.

The MP Neuron model is described below according to the Six Jars of Machine Learning from the earlier article.

1) Data-

The MP neuron takes binary inputs and gives a binary output. If the input data is not binary, it must be converted to binary before it can be fed to the model.

2) Classification-

The classification is also binary: 0 or 1. The model gives a yes or no answer based on the input and the threshold.

3) Model-

Image credit: One Fourth Labs

It consists of a function with a single parameter, the threshold. The inputs are first aggregated (summed) by a function g. If the aggregated value is greater than or equal to the threshold, the model gives a positive output; otherwise it gives a negative output.

The MP Neuron model basically draws a line: positive examples lie on or above the line, whereas negative examples lie below it.
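As a minimal sketch of the model described above (the function and variable names here are mine, not from the course):

```python
def mp_neuron(x, b):
    """MP neuron: output 1 when the sum of the binary inputs meets the threshold b."""
    return 1 if sum(x) >= b else 0

# With threshold b = 2, at least two inputs must be 1 for the neuron to fire.
print(mp_neuron([1, 0, 1], 2))  # -> 1
print(mp_neuron([1, 0, 0], 2))  # -> 0
```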

4) Loss function-

The squared loss function is applied. It takes the square of the difference between the predicted value and the actual value.
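A small sketch of this loss over a set of examples (the helper name is mine):

```python
def squared_loss(y_true, y_pred):
    """Sum of squared differences between actual and predicted labels."""
    return sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred))

# Two of the three predictions are wrong, so the loss is 2.
print(squared_loss([1, 0, 1], [1, 1, 0]))  # -> 2
```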

5) Learning -

Image credit: One Fourth Labs

Learning in the MP neuron consists of finding the threshold value with the lowest prediction error. Since there is only a single parameter, this can be done by brute-force search.
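The brute-force search can be sketched as follows, assuming the squared loss from the previous jar (the function names are illustrative, not from the course):

```python
def fit_mp_neuron(X, y):
    """Try every possible threshold b and keep the one with the lowest squared loss."""
    best_b, best_loss = 0, float("inf")
    n_features = len(X[0])
    for b in range(n_features + 1):  # with binary inputs, b can only range from 0 to n
        preds = [1 if sum(x) >= b else 0 for x in X]
        loss = sum((yt - yp) ** 2 for yt, yp in zip(y, preds))
        if loss < best_loss:
            best_b, best_loss = b, loss
    return best_b

X = [[1, 1, 0], [0, 0, 1], [1, 1, 1], [0, 1, 0]]
y = [1, 0, 1, 0]
print(fit_mp_neuron(X, y))  # -> 2 (threshold that classifies all four examples correctly)
```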

6) Accuracy-

Accuracy is given by the standard metric: the number of correct predictions divided by the total number of predictions.
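In code, this metric is simply (helper name mine):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the actual labels."""
    correct = sum(yt == yp for yt, yp in zip(y_true, y_pred))
    return correct / len(y_true)

print(accuracy([1, 0, 1, 0], [1, 0, 0, 0]))  # -> 0.75 (3 out of 4 correct)
```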

The MP neuron basically helps to find a line that separates the positive values from the negative ones.

The disadvantages of MP Neuron are-

  1. Inputs and outputs must be Boolean.
  2. The slope of the separating line is fixed.
  3. Few intercepts are possible.
  4. The parameters are fixed (every input is weighted equally).

Perceptron

The perceptron algorithm was invented in 1957 at the Cornell Aeronautical Laboratory by Frank Rosenblatt, funded by the United States Office of Naval Research. The perceptron is also a simplified model of a biological neuron.

The perceptron model fits the six jars as follows:

1) Data-

Data can be flexible, with values other than binary. These inputs are standardized before being fed to the function.

Image credit: One Fourth Labs

The standardization formula is given by x' = (x - min)/(max - min), where x is the input value, min is the minimum value of that particular feature and max is the maximum value of that particular feature. This ensures that the inputs are scaled to a compact range between 0 and 1. The weights associated with the features make each feature's contribution positive or negative, which in turn affects whether the final value crosses the threshold or falls short of it.
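The min-max formula above can be sketched for a single feature column (the function name is mine):

```python
def min_max_scale(values):
    """Rescale a feature column to [0, 1] using x' = (x - min) / (max - min)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

print(min_max_scale([2, 4, 6, 10]))  # -> [0.0, 0.25, 0.5, 1.0]
```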

2) Classification-

The classification is binary based on the input.

3) Model-

The model is a function which may have multiple parameters (weights). It computes the sum of each input multiplied by its corresponding weight. A weight can be positive or negative, based on whether the feature has a positive or negative influence on the final prediction.

Image credit: One Fourth Labs

If the result is greater than or equal to the threshold, the model gives one binary output; otherwise it gives the other.
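A minimal sketch of this weighted-sum model (names mine, not from the course):

```python
def perceptron(x, w, b):
    """Perceptron: output 1 if the weighted sum of the inputs crosses the threshold b."""
    weighted_sum = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if weighted_sum >= b else 0

# 2.0 * 0.5 + (-1.0) * 1.0 = 0, which meets the threshold 0, so the output is 1.
print(perceptron([0.5, 1.0], [2.0, -1.0], 0))  # -> 1
```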

4) Loss function-

Image credit: One Fourth Labs

Loss is computed as follows:

If the prediction deviates from the true output, the loss value is 1. If it is the same as the actual output, the loss value is 0. A nonzero loss tells the model that some correction needs to be done; the correction is made by adjusting the weights or the threshold.

5) Learning-

The general recipe for learning the parameters is as follows:

The loss is computed for the given inputs and weights. If the loss is high, the parameters are adjusted by iterating through the data again and again. Once the loss is negligible, the parameters at that point are chosen as the ideal ones.

Image credit: One Fourth Labs

The recipe for learning the parameters of the perceptron is as follows:

The weighted sum of the inputs is computed. If the weighted sum is less than zero when the true output is positive, the input is added to the weights. If the weighted sum is greater than or equal to zero when the true output is negative, the input is subtracted from the weights. This runs in a while loop to find the ideal parameters (i.e. the ideal weights). The algorithm converges when all the inputs are classified correctly.
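The update loop above can be sketched as follows. This is a minimal illustration, not the course's code: the threshold is folded into the weight vector as a bias term, and an epoch cap stands in for the while loop since the loop only terminates on its own for linearly separable data.

```python
def train_perceptron(X, y, max_epochs=100):
    """Perceptron rule: add the input to w on a false negative, subtract it on a
    false positive; stop once every example is classified correctly."""
    n = len(X[0])
    w = [0.0] * (n + 1)                                  # last entry acts as the bias
    data = [(x + [1.0], yt) for x, yt in zip(X, y)]      # constant input for the bias
    for _ in range(max_epochs):
        errors = 0
        for x, yt in data:
            s = sum(wi * xi for wi, xi in zip(w, x))
            if yt == 1 and s < 0:                        # should fire but did not: add
                w = [wi + xi for wi, xi in zip(w, x)]
                errors += 1
            elif yt == 0 and s >= 0:                     # fired but should not: subtract
                w = [wi - xi for wi, xi in zip(w, x)]
                errors += 1
        if errors == 0:                                  # converged: all points correct
            break
    return w

# Learn the AND function, which is linearly separable.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 0, 1]
w = train_perceptron(X, y)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x + [1.0])) >= 0 else 0 for x in X]
print(preds)  # -> [0, 0, 0, 1]
```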

Explanation of addition and subtraction for parameters (weights)-

If the cosine of the angle between the weight vector and the input is between 0 and 1, the angle is acute, i.e. between 0 and 90 degrees; if it is between -1 and 0, the angle is obtuse, i.e. between 90 and 180 degrees. The additions and subtractions adjust this angle so as to get the right output: adding the input pulls the weight vector toward it (making the angle acute), while subtracting pushes it away (making the angle obtuse).
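A small numeric illustration of this relationship between the dot product's sign and the angle (helper name mine):

```python
import math

def angle_deg(w, x):
    """Angle between two vectors; dot product > 0 means acute, < 0 means obtuse."""
    dot = sum(wi * xi for wi, xi in zip(w, x))
    norm = math.hypot(*w) * math.hypot(*x)
    return math.degrees(math.acos(dot / norm))

print(angle_deg([1.0, 1.0], [1.0, 0.0]))   # ~45 degrees: dot product is positive, acute
print(angle_deg([-1.0, 0.0], [1.0, 1.0]))  # ~135 degrees: dot product is negative, obtuse
```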

The learning algorithm will only work if the data is linearly separable.

6) Accuracy-

Accuracy is given by the standard metric: the number of correct predictions divided by the total number of predictions.

The disadvantages of perceptron are-

The positive and negative points in the graph must be separable in a linear fashion, i.e. by a straight line.

That sums up the six jars as applied to the MP neuron and the perceptron in deep learning.

References:

  1. Deep Learning by One Fourth Labs
  2. Wikipedia
