Understanding the F1 Score

Ellie Frank
3 min read · Apr 29, 2023


When it comes to evaluating the performance of a machine learning model, terms like accuracy may be familiar. However, there is another important metric that data scientists use to measure a model’s effectiveness: the F1 score. If you’re new to data science, you may not have heard of this term before, but it’s a critical measure that can help you make informed decisions about which models to use in your work.

The F1 score combines a model’s precision and recall into a single performance metric that gives equal weight to both measures. To quickly review: precision is the proportion of true positives among all predicted positive cases, and recall is the proportion of true positives among all actual positive cases. To learn more about precision and recall, check out this additional article.
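As a quick illustration, here is a minimal sketch in Python that computes precision and recall by hand. The labels are made up for this example.

```python
# Toy labels for illustration only: 1 = positive class, 0 = negative class.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

# Count true positives, false positives, and false negatives.
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))

precision = tp / (tp + fp)  # true positives among predicted positives
recall = tp / (tp + fn)     # true positives among actual positives
print(precision, recall)    # 0.75 0.75
```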

The F1 score is the harmonic mean of a model’s precision and recall. A harmonic mean is a type of average used in mathematics and statistics, calculated by dividing the number of values in a dataset by the sum of the reciprocals of those values. Unlike other types of means, such as the arithmetic mean you’re probably familiar with from math class, the harmonic mean gives more weight to smaller values in the dataset. This is useful in situations where outliers or extreme values could skew the results. In the context of the F1 score, the harmonic mean is used to balance the precision and recall metrics.

$$H = \frac{n}{\sum_{i=1}^{n} \frac{1}{x_i}}$$

where n is the number of values and each x_i is a value in the dataset.
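To see why the harmonic mean leans toward the smaller value, here is a small sketch; the numbers are arbitrary.

```python
def harmonic_mean(values):
    # Number of values divided by the sum of their reciprocals.
    return len(values) / sum(1 / v for v in values)

data = [0.9, 0.1]              # one large value, one small value
print(harmonic_mean(data))     # ~0.18, pulled toward the smaller value
print(sum(data) / len(data))   # 0.5, the arithmetic mean for comparison
```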

When we put precision and recall into the harmonic mean equation, we get the first F1 score formula shown below. But sometimes you might come across the second F1 score formula. Don’t freak out, they both give you the same result; they’re just different ways of writing it down.

$$F_1 = \frac{2}{\frac{1}{\text{precision}} + \frac{1}{\text{recall}}}$$

$$F_1 = 2 \cdot \frac{\text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}$$
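You can verify that the two forms agree with a couple of lines of Python; the precision and recall values here are hypothetical.

```python
precision, recall = 0.75, 0.60  # hypothetical values for illustration

# Form 1: the harmonic mean written with reciprocals.
f1_reciprocal = 2 / (1 / precision + 1 / recall)

# Form 2: the equivalent product-over-sum form.
f1_product = 2 * (precision * recall) / (precision + recall)

print(f1_reciprocal, f1_product)  # both print ~0.6667
```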

The F1 score will always be a value between 0 and 1. This is because precision and recall are each between 0 and 1, and the harmonic mean of values in that range stays within that range: as precision and recall move closer to 0, their harmonic mean approaches 0, and as they move closer to 1, it approaches 1. So the F1 score, the harmonic mean of precision and recall, will also always be between 0 and 1.

An F1 score closer to 1 indicates high precision and recall for a model, which is ideal. However, achieving an F1 score of 1 is rare in real-world scenarios, and it may suggest overfitting to the training data. It is essential to evaluate the model’s performance on a separate test dataset to ensure its generalizability.
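As a minimal sketch, assuming scikit-learn and a synthetic dataset, this is one way to check the F1 score on a held-out test set:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic binary classification data, for illustration only.
X, y = make_classification(n_samples=1000, random_state=42)

# Hold out 20% of the data as a test set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression().fit(X_train, y_train)
y_pred = model.predict(X_test)

# F1 on data the model has never seen, as a check on generalizability.
print(f1_score(y_test, y_pred))
```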

On the other hand, an F1 score closer to 0 indicates poor performance: because the harmonic mean is dominated by the smaller value, a low F1 score means that precision, recall, or both are low. A low F1 score suggests that you may need to adjust the model’s parameters or consider a different algorithm or approach. By calculating the F1 score, we can get a more complete understanding of a model’s performance.

In conclusion, the F1 score is a valuable metric for evaluating the performance of classification models. It strikes a balance between precision and recall and is always between 0 and 1, making it easy to interpret. By understanding how to calculate and interpret the F1 score, you can gain a more complete understanding of your model’s abilities and make adjustments as necessary to improve its performance. Remember, while the F1 score is an important metric, it should be used in conjunction with other evaluation methods to get a comprehensive view of your model’s performance.
