Saturday 12 January 2019

F1 Score, Precision and Recall

Hi,

Initially, when I started learning machine learning, I was not able to grasp the terms F1 score, precision, and recall, along with False Positive and False Negative (ya, these two are more confusing than True Positive and True Negative).

I read post after post and still could not get it into my head. So one day I sat down and just cleared it all out. Ok, fine, here are the details. Let's start with the abbreviations.
Let's create the basic table which we see in every post.

                Predicted
                Yes     No
Actual   Yes    TP      FN
         No     FP      TN
TP - True Positive  ----> Prediction is Positive and that is true (correct)
TN - True Negative  ----> Prediction is Negative and that is true (correct)
FP - False Positive ----> Prediction is Positive and that is false (wrong)
FN - False Negative ----> Prediction is Negative and that is false (wrong)

So in TP/TN/FP/FN, the ending Positive/Negative tells us about the prediction, and the leading True/False tells us whether that prediction is correct or not.

So False Positive means the prediction is "Positive", and the "False" indicates that it is wrong. For example, a data point actually belongs to class 0, but our classifier predicted class 1.

The same reasoning goes for False Negative. I am leaving that one up to you.
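To make this concrete, here is a minimal Python sketch of counting these four numbers by hand. The labels y_true and y_pred are toy values made up just for illustration (1 = Yes/positive, 0 = No/negative):

# Toy labels, made up for illustration: 1 = positive (Yes), 0 = negative (No)
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 1]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # predicted Yes, actually Yes
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # predicted No, actually No
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # predicted Yes, actually No
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # predicted No, actually Yes

print(tp, tn, fp, fn)  # 3 2 2 1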

Now here come Precision and Recall.

Precision

precision = TP / (TP + FP)

          = (Correct Yes predicted by model) / (Total Yes predicted by model [TP + FP])

This tells us: "Out of all the Yes predictions the model made (TP + FP), how many did it get right?"
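Plugging in the toy counts from the sketch above (TP = 3, FP = 2):

tp, fp = 3, 2                  # counts from the toy example above
precision = tp / (tp + fp)
print(precision)               # 0.6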

Recall

recall = TP / (TP + FN)

       = (Correct Yes predicted by model) / (Total Yes in the actual data [TP + FN])

This tells us: "Out of all the actual Yes examples in the data (TP + FN), how many did the model catch?"
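Again with the toy counts from above (TP = 3, FN = 1):

tp, fn = 3, 1                  # counts from the toy example above
recall = tp / (tp + fn)
print(recall)                  # 0.75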


Recall addresses the question: "Given a positive example, will the classifier detect it?"
Precision addresses the question: "Given a positive prediction from the classifier, how likely is it to be correct?"
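And to connect back to the title: the F1 score is simply the harmonic mean of the two, F1 = 2 * precision * recall / (precision + recall), so a model only gets a high F1 when both precision and recall are high. If you have scikit-learn installed, here is a sketch that cross-checks all three numbers on the same toy labels as above:

from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 1, 1, 0, 0, 0, 1, 0]   # same toy labels as above
y_pred = [1, 0, 1, 0, 1, 0, 1, 1]

print(precision_score(y_true, y_pred))  # 0.6
print(recall_score(y_true, y_pred))     # 0.75
print(f1_score(y_true, y_pred))         # ~0.667, i.e. 2*0.6*0.75 / (0.6 + 0.75)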
