Why am I not getting the expected benefits from using AI in marketing?

Added on:
11 April 2024

Interest in artificial intelligence (AI) is not waning. Almost every day brings announcements of further breakthroughs in the field. The public is most excited about the achievements of so-called generative AI, which are the most spectacular and impressive: talking to a computer in natural language, or a computer painting pictures or creating a movie from a given script, appeals to the imagination. It is worth remembering, however, that AI also encompasses more mundane models that operate in the background and, without attracting as much attention, play an important role in many business processes, including marketing. Sometimes, though, they fail to deliver the expected benefits, and their performance can be disappointing. What mistakes contribute to such situations, and what can be done to avoid them?

An incorrectly worded problem

Sometimes a fundamental problem arises at the very beginning of an AI project: a clear disconnect between the actual business need and the definition of the problem the AI team sets out to solve. For example, a marketing team has the goal of reducing the number of departing customers. It intends to run a special campaign offering coupons with attractive discounts. Naturally, the budget for this activity is limited. As a first step, the team wants to identify the customers most at risk of churn.

So it commissions the AI unit to develop a model that estimates, for each customer, the probability of leaving. The AI team does its job brilliantly, building a model with very high prediction accuracy. The marketing department decides to use the model and, until the budget is exhausted, qualifies the customers with the highest probability of leaving for the campaign. The campaign runs. Quite a few at-risk consumers stay. Everyone has the feeling of a job well done and a budget reasonably used. But was the budget really used optimally? Could something have been done better? It turns out that yes.

Instead of the probability of leaving, one could predict the chance of a positive reaction to the campaign. A seemingly minor difference, yet it could yield dramatically better results. The variant used qualified those most at risk. Among them, however, were people who could not be persuaded to stay by any campaign. These are very often precisely the people at the top of the at-risk list: frustrated with customer service, disappointed with the quality of the product, already looking for an alternative supplier for some time. By spending the budget on these consumers, slightly lower-risk (but still high-risk) consumers were left out, even though they were more likely to change their decision thanks to the campaign. A portion of the budget was wasted on trying to convince those who could not be convinced, while the opportunity to convince those whose decision could still be influenced was missed. A better, more precise definition of the problem in the context of the expected business effect would have made it possible to benefit far more from what AI algorithms have to offer.
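The difference between targeting by churn risk and targeting by expected response to the campaign (a technique known in the literature as uplift modeling) can be illustrated with a minimal Python sketch. All numbers, and the assumed relationship between risk and persuadability, are synthetic and purely illustrative; they are not data from the case described:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # synthetic customer base

# p_churn: probability the customer leaves if we do nothing (assumed)
p_churn = rng.uniform(0.05, 0.95, n)

# uplift: how much a coupon reduces that probability (assumed).
# The most at-risk customers are often "lost causes", so we let
# persuadability shrink as risk grows.
uplift = 0.30 * (1 - p_churn) * rng.uniform(0.5, 1.0, n)

budget = 1_000  # we can afford to send 1,000 coupons

# Variant A: target the highest churn risk (what the team in the story did)
by_risk = np.argsort(-p_churn)[:budget]
saved_risk = uplift[by_risk].sum()

# Variant B: target the highest expected response to the campaign
by_uplift = np.argsort(-uplift)[:budget]
saved_uplift = uplift[by_uplift].sum()

print(f"expected customers saved, risk-based targeting:   {saved_risk:.0f}")
print(f"expected customers saved, uplift-based targeting: {saved_uplift:.0f}")
```

Under these assumptions, uplift-based targeting retains noticeably more customers for the same budget, because it skips the high-risk customers no coupon can sway.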

Inappropriate measures of success

In many situations, adopting the wrong indicators leads to wrong decisions. It can even lead to abandoning tools that work well. One thriving Polish company had a custom-built AI predictive system that personalized the offer recommended in each mailing. The company had a very broad product portfolio and many competing offers. The system was to select a communication that was relevant to the consumer and at the same time maximized the possible profit. It was also meant to avoid bombarding the consumer with too many messages; the main concern was that “spammed” customers would opt out of receiving mailings. The company boasted that its “unsubscribe” rate remained very low. Under intense sales pressure, however, managers began to see limiting the number of messages as an obstacle to achieving their goals. They assumed that sending more messages would bring more sales, and their only worry was a possible rise in the unsubscribe rate.

A quasi-experiment was conducted: the number of messages was increased while sales and unsubscribe rates were observed. Sales went up and no rise in the unsubscribe rate was seen. This encouraged further increases in the number of messages, until maintenance of the aforementioned AI tool was abandoned altogether. The company thus took a step backward: the model was discarded in favor of “expert” qualification of consumers for communications. The unsubscribe rate, which remained stable, kept decision-makers convinced that the number of mailings, provided it stayed within, as they put it, “the limits of common sense,” did not discourage customers. The voices of the data science team, which tried to convince them to take a broader view of the problem, were ignored. A schoolboy mistake was made.

Managers ignored the evident downward trend in the open rate and dismissed the advanced model as an unnecessary cost. They failed to consider that consumers may become so saturated that they start ignoring messages from that sender altogether: they stop opening them, and as a result they don’t even bother to click the “unsubscribe me” link. The possibility of maintaining a long-term relationship, and of generating profits from the communication in the future, was sacrificed for a short-term sales effect.
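A simple monitoring check would have surfaced the problem early. The sketch below uses entirely synthetic, illustrative numbers (the volumes, rates, and decay shape are assumptions, not the company’s data) to show how the open rate can fall steadily while the unsubscribe rate stays flat:

```python
import numpy as np

# Hypothetical weekly mailing stats as message volume grows (all assumed)
weeks = np.arange(12)
sent = 10_000 * (1 + 0.15 * weeks)           # messages sent grow each week
open_rate = 0.35 * np.exp(-0.08 * weeks)     # opens decay as consumers saturate
unsub_rate = np.full_like(open_rate, 0.002)  # unsubscribes stay flat throughout

# Watching only the unsubscribe rate, nothing looks wrong:
print(f"unsubscribe rate, first vs last week: {unsub_rate[0]:.3f} vs {unsub_rate[-1]:.3f}")

# A basic linear trend check on the open rate tells a different story:
slope = np.polyfit(weeks, open_rate, 1)[0]
print(f"open-rate trend per week: {slope:+.4f}")
if slope < -0.005:
    print("warning: audience saturation – opens falling while unsubscribes stay flat")
```

The point is not the particular threshold but the practice: tracking engagement metrics alongside the single “success” metric reveals the saturation that the unsubscribe rate alone conceals.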

Mistakes in communication between marketing and AI teams

The common denominator of the two situations cited above is, in fact, a lack of adequate communication between the marketing team and the AI team. In the first case, more information could have been given to the AI team about the business objective and context (including budget constraints) of the project. This would have allowed a more adequate definition of the problem and fuller exploitation of the possibilities offered by advanced modeling, which in turn would have translated into better budget utilization and higher ROI. In the second case, more weight should have been given to the concerns raised by the AI team about the definition of the problem and the measure adopted. This would have avoided the wrong decision, costly in the long run, to return to old methods and reject the potential of AI.

The success of a project, and the full realization of the opportunity AI offers, require good communication and interaction between marketing experts and AI experts. Avoiding the following mistakes can help achieve this:

  • too broadly defined business objective (“we want to reduce the number of departing customers” is several levels of detail too coarse),

  • vague definitions of fundamental concepts (sometimes it is a challenge to define what it means that a customer has left),

  • failure to define the context and actual business objective in the brief given to the AI team,

  • concealment by the marketing team from the AI team of deficiencies in understanding the specifics and capabilities of AI solutions,

  • the resulting excessive expectations of the project’s results, or assuming in advance that AI cannot help solve a given marketing problem,

  • the AI team’s concealment from the marketing team of deficiencies in understanding of marketing issues and the project context,

  • the related limitation of the AI team to a literal interpretation of the brief provided,

  • the use of industry “newspeak”,

  • excessive focus on technicalities at the expense of the AI team’s loss of business perspective and the real purpose of the project.


The examples cited in this article are just the tip of the iceberg. Some may see both situations as simple, even schoolboy mistakes. That’s fine: it means they are already at a higher level of understanding of the specifics of AI projects. However, pitfalls lurk there as well. For others, even these two examples may be eye-opening, making them reflect and look for similar problems in their own projects. That’s a good thing, too: it means they are taking another important step toward more fully realizing the potential that lies in marketing applications of AI. In either case, it’s important to remember that successful application of AI requires good communication and cooperation between the data science/AI team and the marketing team.