Conversational AI: human interaction and the (rocky?) road towards empathy

Alberto Maestri Published on 11/13/2024

Conversational AI is a type of artificial intelligence (AI) that can simulate human conversations. It is made possible by natural language processing (NLP), a field of AI that allows computers to understand and process human language, and by new models that underpin generative AI.

Conversational AI works by using a combination of natural language processing (NLP), foundation models and machine learning (ML).

Conversational AI systems are trained on enormous amounts of data, including text and speech. This data teaches the system how to understand and process human language. The system then uses this knowledge to interact with humans in a natural way. It constantly learns from these interactions, improving the quality of its responses over time.
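To make that loop a little more concrete, here is a minimal Python sketch of how a conversational interface might tie these pieces together. The generate_reply function is a hypothetical placeholder for whatever NLP or foundation-model backend a real system would call; it illustrates the flow, not any vendor’s API.

```python
# Minimal sketch of the loop behind a conversational AI system:
# the interface collects user input, passes it (with the conversation
# history for context) to an underlying language model, and returns
# the model's reply. `generate_reply` is a hypothetical stand-in for
# the NLP / foundation-model backend a real system would call.

def generate_reply(history: list[dict], user_message: str) -> str:
    # Placeholder: a production system would call an NLP / LLM service here,
    # using `history` so the model can resolve context ("it", "that order"...).
    return f"(model answer to {user_message!r}, given {len(history)} prior turns)"

def chat() -> None:
    history: list[dict] = []
    while True:
        user_message = input("You: ")
        if user_message.lower() in {"quit", "exit"}:
            break
        reply = generate_reply(history, user_message)
        print(f"Assistant: {reply}")
        # Each exchange is appended to the history, which is how the system
        # keeps track of the current conversation; longer-term improvement
        # comes from retraining or fine-tuning on logged interactions.
        history.append({"user": user_message, "assistant": reply})

if __name__ == "__main__":
    chat()
```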

Google Cloud

As this definition from Google Cloud shows, conversational AI offers the potential for interacting with and engaging the customers of any business, of any size, in any sector. Conversational AI can recognise any type of voice or text input, and then mimic human interactions by understanding and responding to questions asked in natural language.

This technology promises significant benefits to businesses and professionals who deploy it in their marketing, communications, CRM and customer experience operations. Here are a few examples:

  • Lower costs and higher productivity through automation
  • Fewer human errors for certain activities
  • More engaging, interactive and holistic user experiences
  • Always-on, 24/7 customer service, even when human agents are unavailable
  • Greater accessibility: conversational AI can also be used to improve access for disabled users, as well as for people with limited technical knowledge or different linguistic backgrounds.

These are just some of the many potential upsides of conversational AI, but the technology is not without its downsides.

The first relates to what can be called human-machine interaction.

It’s an area that should give pause for thought as the deployment of conversational AI gathers pace.

The uncanny valley of conversational AI

To help us understand the challenge posed by human-machine interaction, we’re going to revisit a theory that we’ve already explored on this blog.

The uncanny valley theory was devised and developed by Masahiro Mori in 1970. It holds that as robots take on more and more human characteristics, our empathy towards them will grow, eliciting a positive reaction.

But we will eventually reach a point where our feelings about this humanoid change to repulsion or aversion because of how eerily it resembles an actual human being. Mori called this stage the “uncanny valley”.

If we apply the uncanny valley theory to conversational AI, a time will come when its human likeness will generate revulsion. Think, for example, of the shock of realising that the voice on the other end of the line belongs to a robotic-sounding callbot rather than a real human.

This will have the paradoxical effect of distancing the person from the conversational AI system, with negative repercussions for the company and its brand. These will be felt all the more keenly if significant amounts of time and money have been invested in rolling out the system.

AI needs empathy too

The crux of the matter lies in the empathetic balance that we strike when designing AI systems.

But is it possible to see empathy in AI?

Before we go any further, we first need to understand the spectrum of empathy as it applies to designing systems and ways of interacting with them.

  • Pity and sympathy require only minimal effort to understand the user
  • Empathy and compassion demand active engagement that can bring about positive change

Empathy has three main drivers:

  • Congruence of feelings: someone who feels empathy must put themselves in the shoes of the other person. This distinguishes empathy from just a rational understanding of another person’s emotions.
  • Asymmetry: someone who feels empathy experiences the emotion only because the other person feels it, and that emotion is more appropriate to the other person’s situation than to their own.
  • Awareness of the other: there must be a basic awareness that empathy has to do with the feelings of another person.

With this in mind, it’s clear that AI isn’t an empathetic entity; rather, it recognises the emotions of others using parameters or metrics (like facial expression). That said, the idea of empathetic AI has been around for some time now, underlining how artificial intelligence is increasingly able to show empathy towards us in our interactions with it.

Just think of how fluently ChatGPT responds to our prompts, often in ways that raise a smile or spark intrigue.
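To see what this looks like in practice, here is a deliberately simplified Python sketch of “recognising emotions using parameters or metrics”. It uses crude keyword matching rather than a real emotion model, and the labels and response templates are invented for illustration; the point is that the system classifies signals and selects a response, it does not feel anything.

```python
# Toy illustration (not a production approach) of how "empathetic" AI
# typically works under the hood: map observable signals (here, words in
# the user's message) to an emotion label, then pick a response template
# associated with that label.

EMOTION_KEYWORDS = {
    "frustrated": {"annoyed", "angry", "ridiculous", "waiting", "useless"},
    "worried": {"worried", "afraid", "concerned", "risk", "problem"},
    "pleased": {"great", "thanks", "love", "perfect", "happy"},
}

RESPONSES = {
    "frustrated": "I'm sorry this has been frustrating. Let's fix it together.",
    "worried": "I understand the concern. Here's what I can check for you.",
    "pleased": "Glad to hear it! Is there anything else I can help with?",
    "neutral": "Thanks for the details. How can I help?",
}

def detect_emotion(message: str) -> str:
    words = set(message.lower().split())
    # Pick the emotion whose keyword set overlaps most with the message.
    best = max(EMOTION_KEYWORDS, key=lambda e: len(words & EMOTION_KEYWORDS[e]))
    return best if words & EMOTION_KEYWORDS[best] else "neutral"

print(RESPONSES[detect_emotion("I have been waiting for days, this is ridiculous")])
# -> "I'm sorry this has been frustrating. Let's fix it together."
```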

Empathy, artificial intelligence and branding: the recipe for successful interaction?

To fully understand both the potential and challenges of using AI to optimise brand conversations, it helps to first take a step back and think in terms of classical brand management theory.

Traditional branding theory holds that it is fundamental for a brand to identify its archetype and use this to guide everything it does in terms of product, marketing and communication. Every archetype has essential traits, and perceptions of these have to be managed to prevent counter-narratives developing.

For example, if a brand is positioned as a ruler (like Bibendum [the Michelin Man] or Jean-Paul Gaultier’s La Femme and Le Male fragrances), a seemingly positive narrative highlighting its creativity could cause confusion about its identity.

Do brand archetypes still apply in the age of artificial intelligence?

Undoubtedly. Above all because they are the first point of contact, the public face of the company. Chiara Longoni of Boston University and Luca Cian of the University of Virginia published a fascinating study in the Journal of Marketing in which they observe a word-of-machine effect: that is to say, situations in which humans prefer AI-generated recommendations to those provided by other humans.

  • The researchers found that when we want to achieve a utilitarian objective or are focused on the functional features of what we’re buying (if, say, we have to buy a dishwasher), we tend to trust the recommendations of machines more.
  • However, when experiential factors or hedonic aspects like taste and smell come into play (if buying wines or fragrances, for example), advice from artificial intelligence alone is not enough, and must be supplemented with human input for it to be trusted. A perfect case study is Stitch Fix, a personal styling service that combines AI-driven and human-driven recommendations.

But artificial intelligence now goes further: not only can it offer marketers more options for supporting customers, it can do so while embodying a specific brand archetype, right down to its unique tone of voice. It’s something that branding professionals should take seriously. Indeed, we’ve already embraced AI assistants in our everyday lives thanks to their seeming ability to show empathy and affinity. And researchers are striving to create machines that can joke and play: Siri already gets jealous if you accidentally call her by a rival AI assistant’s name.

Brand personality in the age of AI

Artificial beings with strong personalities.

According to Ben Essen (Global Chief Strategy Officer at London-based advertising agency Iris):

“Building an AI personality isn’t only a problem for Google, Apple and Amazon. It’s a problem any brand who wants to continue to communicate directly with their customers will have to face in the age of the conversational user interfaces.”

Ben Essen

A revolutionary but risky change.

Essen argues that “it will become impossible to pass challenging conversations up to ‘head office’ to be dealt with one at a time — AI technology will need to fend for itself in challenging, unpredictable and unprecedented scenarios.”

But how? There are different approaches, but they all follow the same core principle summed up by Designit in their article Getting to Know You: Designing Trustworthy Artificial Personalities.

Human beings act differently depending on whether they’re, say, hanging out with friends or attending a job interview. To gain our trust, AI personalities will have to be able to understand context in the same way. Because conversations can be ambiguous, and everyone is telling a different story.

Designit

Oren Jacob, a 20-year veteran of Pixar who is now an Engineering Lead at Apple, uses this great metaphor:

“One can think of computer conversation kind of like interactive screenwriting. We are writing lines 1, 3, 5, and 7 and then, oddly, we have no control whatsoever over what comes back in lines 2, 4, 6, and 8.”

Oren Jacob at Designit

We all know when we reach the limits of our knowledge and have to give up. But what will AI-powered brands do? Will they say ‘I dunno’? Will they look at Wikipedia or another information source? Or will they try to guess?
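One way to frame Jacob’s question is as an explicit fallback policy: answer when the system is confident, admit uncertainty rather than guess, and hand over to a human when confidence is very low. The sketch below is a hypothetical illustration of such a policy, not a description of any real product; the thresholds and wording are invented.

```python
# Hypothetical sketch of the design choice Jacob raises: what should a
# branded assistant do when it hits the limits of its knowledge?
# The thresholds, messages and example calls below are illustrative only.

ADMIT_THRESHOLD = 0.75     # below this, don't pretend to know
ESCALATE_THRESHOLD = 0.40  # below this, hand over to a human

def answer_or_fall_back(answer: str, confidence: float) -> str:
    if confidence >= ADMIT_THRESHOLD:
        return answer  # confident enough to answer in the brand's own voice
    if confidence >= ESCALATE_THRESHOLD:
        # Honest "I don't know", phrased to stay on brand rather than guessing.
        return "I'm not sure about that one. Let me point you to a source I trust."
    # Very low confidence: escalate rather than risk an off-brand guess.
    return "I'd rather not guess. I'm connecting you with a colleague who can help."

print(answer_or_fall_back("Your order ships tomorrow.", confidence=0.92))
print(answer_or_fall_back("Maybe next week?", confidence=0.55))
print(answer_or_fall_back("It could be anything.", confidence=0.20))
```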

These are tricky problems, because the very image of a brand is at stake. The smallest mistake by a bot designed to seem as human as possible could cause just as much reputational damage as poor customer service provided by a human sales assistant in store, for example.

Just as mass customisation was a perfect fit for an economy not yet oriented towards and shaped by AI, artificial intelligence and machine learning now enable brands to develop their personality, giving it the sophistication required to have meaningful and original conversations with people. These AIs will become personalities in their own right, with clear roles, just like humans; and archetypes will have to be augmented by other ‘shadow’ archetypes to make them even more human. AIs will have to be designed for a world that is VUCA: volatile, uncertain, complex and ambiguous.

In short, conversational intelligence is going to rewrite the rules of marketing and brand storytelling. Are you ready?