By Adriana Beal
If you’re a sales professional, you’ve probably heard people say that AI (artificial intelligence) technology will tell you “the right buying audience, how to market to them, and when to call them — even down to the minute.” “If you’re not starting to operate this way,” AI solution providers warn, “you’re going to get outmaneuvered by the competition using this type of technology.”
Yet, I’m not seeing many companies reporting improvements in revenue and win rates as a result of the adoption of AI technology. What’s going on here?
A couple of years ago, I interviewed the sales manager of a luxury car dealership for a project I was working on. The sales manager was celebrating: a policy issued by headquarters, requiring sales reps to call each prospect eight times in order to increase the dealership’s win rate, had just been rescinded.
The sales manager explained that the rule had been created after executives at headquarters saw a report showing a correlation between a successful sale and an average of eight calls to the prospect. As the sales manager intuitively understood, the executives had made a classic mistake: confusing correlation with causation.
The policy caused sales to go down at the dealerships that complied with the new rule. The smartest dealerships ignored it, which made the flaw easy to demonstrate by comparing results. The executives quickly realized the approach was having a negative impact and retired the directive. The conclusion: the aggressive sales tactic turned off prospects who were considering buying a car from that brand and would have closed the deal if allowed to follow their own decision process without being subjected to so many insistent calls.
It’s not surprising to see this type of misguided use of predictive analytics causing sellers to fail to achieve the expected results from the adoption of AI sales tools. It’s not that these tools can’t deliver value to sales organizations, but it’s necessary to first understand their limitations and ensure that the selected tool has a solid framework for evaluating data in a scientific way.
To reuse an example from the book Everydata: The Misinformation Hidden in the Little Data You Consume Every Day, if every Monday morning your dog starts barking, and a few minutes later, the garbage truck arrives, it would be an obvious mistake to assume that your dog’s bark causes the garbage truck to come. In this case, it is likely that the causality is reversed: your dog just hears the garbage truck before you do. Few people would make this exact mistake, and yet they make similar mistakes in their decision making on a daily basis, as the luxury car executives did.
Artificial intelligence tools can analyze data from sales calls and use signals like talk-to-listen ratio, number and kinds of questions, etc. to uncover valuable insights such as “open-ended questions increase our win rate, while too many factual questions asked in quick succession tend to shut down the prospect and reduce our win rate”. However, for more complex applications, such as predicting which deals are more likely to close, things become much trickier.
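As a rough sketch, the kinds of conversational features such a tool might extract look something like this. The transcript and the open-ended-question heuristic below are invented for illustration, not taken from any real product:

```python
# Hypothetical call transcript: (speaker, utterance) pairs.
transcript = [
    ("rep",      "What challenges are you facing with your current process?"),
    ("prospect", "Mostly reporting. It takes our team days to close the books."),
    ("rep",      "How many people are on that team?"),
    ("prospect", "Five."),
    ("rep",      "Got it."),
]

# Talk-to-listen ratio: how much the rep speaks relative to the prospect.
rep_words = sum(len(text.split()) for who, text in transcript if who == "rep")
prospect_words = sum(len(text.split()) for who, text in transcript if who == "prospect")
talk_to_listen = rep_words / prospect_words

# Crude heuristic: count rep questions that open with "what"/"how"/"why".
open_ended = sum(
    1 for who, text in transcript
    if who == "rep" and text.lower().split()[0] in {"what", "how", "why"}
)

print(round(talk_to_listen, 2), open_ended)  # 1.5 2
```

Features like these are well defined and fully observable in the call recording, which is why correlating them with win rates is the comparatively easy case.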
Imagine that a seller had one deal that closed in July and one that was lost. Both have the exact same profile in the company’s CRM, from the industry and size of the prospect to the number of calls and meetings it took until a decision was made, the spacing between calls, how soon decision-makers got involved in the opportunity, etc. Based on the available data, it would have been impossible for a predictive model to determine that one of these deals would close and the other wouldn’t.
The core problem here is omitted variables. The seller collecting prospect interaction data in their CRM had access to only half the story: there was little or no visibility into what was happening on the buyer side. Unbeknownst to the seller, in the lost deal the prospect’s company was going through a change in leadership. The decision to go with a competitor was driven by a new executive with a strong relationship with the CEO of the chosen provider, who influenced the final choice. Several variables affecting the prospects’ decisions took different values in the two deals yet could not be taken into account, making it impossible to accurately predict the different outcomes.
In this scenario, even if a company possessed a sizable number of records about the outcomes of past deals (hundreds of thousands of deal records to train and test a machine learning model with), it would not be able to accurately predict the likelihood of future deals closing. The missing predictor variables would prevent any algorithm from generating an accurate forecast.
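A minimal Python sketch makes the point concrete (the deals and features below are hypothetical): any model is a function of its inputs, so two deals with identical recorded features must receive the same prediction, and at least one of them will be wrong no matter how the model was trained.

```python
# Hypothetical CRM data: two deals with identical recorded features
# but different real-world outcomes.
crm_features = {"industry": "automotive", "size": "enterprise",
                "calls": 12, "meetings": 4}

deal_won = {"features": dict(crm_features), "outcome": "won"}
deal_lost = {"features": dict(crm_features), "outcome": "lost"}

def any_model(features):
    # Stand-in for *any* trained model: whatever logic it learned,
    # identical inputs must yield identical outputs.
    return "won" if features["calls"] >= 8 else "lost"

predictions = [any_model(d["features"]) for d in (deal_won, deal_lost)]
print(predictions)  # the same prediction for both deals

correct = sum(p == d["outcome"] for p, d in zip(predictions, (deal_won, deal_lost)))
print(correct)  # at most 1 of the 2 deals can ever be predicted correctly

# The omitted variable that actually separated the two deals:
deal_lost["features"]["leadership_change"] = True
deal_won["features"]["leadership_change"] = False
```

Only once the omitted variable (here, the buyer-side leadership change) is recorded do the two deals become distinguishable at all; no amount of extra training data on the original features fixes this.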
The lesson here is simple: evaluate AI tools with care before deciding to adopt one. Pay attention to what the solution provider is saying, and to what they are not saying. Ask to see case studies or demos that show the kinds of insights the tool provides. Does the tool jump from correlation to causation based on preconceptions rather than actual evidence? Do the relationships it identifies make intuitive sense? Why should eight calls to a prospect improve your conversion rates? What variables are being used to produce a prediction, and are there omitted factors that could be crucial for the accurate forecast the tool claims to be able to produce?