This article by Sailthru Chief Data Scientist Jeremy Stanley originally appeared on AdExchanger.

Predicting the future is not easy.

Yet predictive algorithms are commonly criticized because they fail to perfectly foresee very rare events. That is because rare events are just that: uncommon and subtle. Asking a predictive algorithm to perfectly identify the 1% of consumers who will purchase a specific product is a wildly unrealistic expectation.

Keep in mind that, even with all the big data available today, these solutions are trying to predict human behavior. Humans are complicated, and there are billions of us all behaving in increasingly interconnected ways. Despite what Hollywood might lead you to believe, setting the expectation that an algorithm can predict our future behavior with anything near complete certainty is a fool’s errand.

So rather than looking to predictive algorithms for definitive predictions, we should instead ask: How much better is an algorithm at identifying these rare events than random guessing alone?

Consider this example of a predictive algorithm that identifies users likely to make a specific purchase on an ecommerce site. Group A, identified by the algorithm, represents 5% of consumers with a 20% average chance of purchasing a product. Group B is the other 95% of consumers, with a 0.0001% chance of purchasing the same product. Randomly selecting 5% of users from the entire population would yield a group with only about a 1% chance of purchasing, so in this example the predictive algorithm generates a 20 times lift (20% / 1%) by selecting the 5% most likely to purchase.

In other words, it found the 5% of users who are 20 times more likely to purchase than the average consumer, even though it will still be right only one out of five times. With the right data and science, generating this kind of lift is well within the capabilities of an ecommerce predictive model.
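To make the arithmetic concrete, here is a minimal sketch in Python using the numbers above (the variable names and hard-coded rates exist only for this illustration; they are not any particular product's API):

```python
# Lift arithmetic for the Group A / Group B example above.
group_a_share = 0.05      # Group A: 5% of consumers flagged by the algorithm
group_a_rate = 0.20       # 20% average chance of purchasing in Group A
group_b_rate = 0.000001   # Group B: a 0.0001% chance of purchasing

# A random 5% sample purchases at the population's blended average rate.
baseline_rate = group_a_share * group_a_rate + (1 - group_a_share) * group_b_rate

lift = group_a_rate / baseline_rate
print(f"baseline purchase rate: {baseline_rate:.2%}")  # ~1.00%
print(f"lift over random selection: {lift:.1f}x")      # ~20.0x
```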

You might critique this approach for setting a low bar: the algorithm only had to beat random guessing. Surely, you might think, a better measure would be to compare the algorithm against a human’s ability to identify these optimal users. In domains such as health care, where highly trained individuals can analyze every single consumer with great care to predict outcomes, that is a valid point.

However, in general, there are several key differences between algorithms and humans that weigh in favor of algorithms:

Cognitive bias: Humans are horrible at making predictions. We are often blinded by a long list of cognitive biases, such as bandwagoning, self-serving bias, the illusion of validity, stereotyping, the empathy effect and suggestibility.

Scalability: Let’s say an expert is able to make a meaningful prediction every half hour. However efficient or knowledgeable he or she may be, that pace cannot scale to the thousands or hundreds of thousands of predictions required in complex ecommerce and media organizations.

Time to predictions: Training an expert can take years or even decades before they can make valuable predictions. With modern computing and algorithms, training an accurate predictive model can be done in minutes.

Self-awareness: Not only do algorithms provide meaningful lift in the accuracy of their predictions, but they can also tell us how certain they are, such as whether a particular consumer is within the segment with a 13% chance of purchasing. Humans are terrible at estimating how certain they are. We almost always overestimate our certainty by wide margins.
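As an aside on what that kind of probability output looks like in practice, here is a minimal sketch using scikit-learn’s logistic regression (the data and features are synthetic, invented purely for illustration; this is not the model described in the article). A standard classifier reports a purchase probability per user rather than a bare yes/no:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in data: 1,000 users, 3 behavioral features each,
# and a binary label for whether each user purchased.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 1.2).astype(int)

model = LogisticRegression().fit(X, y)

# predict_proba returns a probability for each user, not just a label,
# so the model can say, e.g., "this consumer has a ~13% chance of purchasing".
purchase_prob = model.predict_proba(X)[:, 1]
print("example per-user purchase probabilities:", purchase_prob[:5].round(3))
```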

In the end, we need to better understand, and set realistic expectations for, the abilities of predictive algorithms. They can be incredibly powerful tools. When combined with software automation and accurate user data, they can provide highly personalized experiences that are far superior to a one-size-fits-all or human expert-curated approach.

Just because they aren’t perfect, we shouldn’t fool ourselves into thinking they aren’t incredibly valuable.

Follow Jeremy Stanley (@jeremystan), Marigold Engage by Sailthru (@sailthru) and AdExchanger (@adexchanger) on Twitter.