The Beginners’ Guide to the ROC Curve and AUC
Understand what the ROC curve and AUC are, how they work, and how to use them for classification problems

In the previous article, we covered classification evaluation metrics such as Accuracy, Precision, Recall, and F1-Score. In this article, we will go through another important evaluation metric: the AUC-ROC score.
What is AUC-ROC?
The ROC curve (Receiver Operating Characteristic curve) is a graph that shows the performance of a classification model at different probability thresholds.
The ROC graph is created by plotting TPR against FPR: FPR (False Positive Rate) goes on the x-axis and TPR (True Positive Rate) on the y-axis, for probability threshold values ranging from 0.0 to 1.0.
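As a concrete illustration, here is a minimal sketch that plots a ROC curve with scikit-learn and matplotlib. The synthetic dataset and the logistic regression model are just placeholders for your own data and classifier:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve
from sklearn.model_selection import train_test_split

# Synthetic binary classification data (placeholder for a real dataset)
X, y = make_classification(n_samples=1000, n_classes=2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Any classifier that outputs probabilities works here
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_scores = model.predict_proba(X_test)[:, 1]  # probability of the positive class

# roc_curve sweeps over probability thresholds and returns FPR and TPR at each one
fpr, tpr, thresholds = roc_curve(y_test, y_scores)

plt.plot(fpr, tpr, label="ROC curve")
plt.plot([0, 1], [0, 1], linestyle="--", label="Random classifier")
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.legend()
plt.show()
```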
True Positive Rate (TPR), also known as Recall or Sensitivity, is the ratio of correctly predicted positive labels to all actual positive labels: TPR = TP / (TP + FN).
False Positive Rate (FPR) is the ratio of incorrectly predicted positive labels to all actual negative labels: FPR = FP / (FP + TN).
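Both rates follow directly from the confusion matrix. Here is a minimal sketch, using hypothetical labels and thresholded predictions:

```python
from sklearn.metrics import confusion_matrix

# Hypothetical true labels and predictions at a fixed threshold
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# For binary labels, confusion_matrix returns [[TN, FP], [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

tpr = tp / (tp + fn)  # correctly predicted positives / all actual positives
fpr = fp / (fp + tn)  # incorrectly predicted positives / all actual negatives
print(f"TPR = {tpr:.2f}, FPR = {fpr:.2f}")
```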
AUC stands for Area Under the ROC Curve. It measures the entire two-dimensional area underneath the ROC curve, from (0,0) to (1,1).
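In practice, you rarely integrate the curve yourself; scikit-learn's roc_auc_score computes the area for you. A minimal sketch, repeating the same synthetic setup as in the plotting example above:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Same synthetic setup as in the plotting sketch above
X, y = make_classification(n_samples=1000, n_classes=2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_scores = model.predict_proba(X_test)[:, 1]

# roc_auc_score summarizes the ROC curve as a single number between 0.0 and 1.0;
# 0.5 corresponds to random guessing, 1.0 to a perfect classifier
print(f"AUC = {roc_auc_score(y_test, y_scores):.3f}")
```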