Day 35 of 100 Days of AI

Precision & Recall (aka Sensitivity)

When working with classification models in ML, I still confuse precision with recall and vice versa. So it was good to come across this handy chart today that illustrates the difference.

Chart from Zeya, 2021

A good precision score is really important in areas where the cost of a false positive is high. For example, in email spam detection, it’s bad to falsely flag an email from a friend as spam, so false positives should be minimised as much as possible. The precision score is worth knowing in such cases: it measures the proportion of predicted positives that are actually positive, i.e. true positives divided by (true positives + false positives).
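As a quick sketch of that formula in Python (the spam-filter counts here are made up purely for illustration):

```python
# Hypothetical counts from a spam filter's predictions.
true_positives = 90   # spam emails correctly flagged as spam
false_positives = 10  # emails from friends wrongly flagged as spam

# Precision = TP / (TP + FP):
# of everything the model flagged as spam, how much really was spam?
precision = true_positives / (true_positives + false_positives)
print(f"Precision: {precision:.0%}")  # 90%
```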

A good recall (or sensitivity) score, on the other hand, is worth knowing in areas where the cost of a false negative is high. For example, if we built a model to predict which companies will succeed, we don’t want a model that falsely tells us a company is going to fail, causing us to miss out on investing in it. The recall score is key in that example: it measures the proportion of all actual positives that the model correctly predicted, i.e. true positives divided by (true positives + false negatives).
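And the same kind of sketch for recall, again with invented counts for the company-success example:

```python
# Hypothetical counts from a company-success model's predictions.
true_positives = 70   # successful companies the model correctly identified
false_negatives = 30  # successful companies the model wrongly predicted would fail

# Recall = TP / (TP + FN):
# of all the companies that actually succeeded, how many did the model catch?
recall = true_positives / (true_positives + false_negatives)
print(f"Recall: {recall:.0%}")  # 70%
```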

I also came across this chart on LinkedIn, which explains the same concept even more visually.
