Inducing 'empathy' in AI can heighten customer experience

CIOs and technology leaders should institutionalize empathy within their AI platforms to heighten customer experience, says a new study

Many businesses are turning to artificial intelligence (AI) such as digital assistants and chatbots to improve the customer experience. However, a new report released by Pegasystems indicates that consumers lack an understanding of how they can benefit from AI tools and systems.

The research indicates that empathy is the key to AI-based interaction between a brand and its customers. It also urges CIOs and technology leaders to institutionalize empathy within their AI platforms for a heightened customer experience.

Can AI replace a real person?

The study shows that despite the growing usage of AI technologies, consumers are more likely to trust a real person to help make decisions. Users trust only those AI tools that seek to incorporate empathy and ethical decision-making. Dr. Rob Walker, vice president of decisioning and analytics at Pega, explains that empathy is the key ingredient in building trust between humans and technology.

“When it comes to seeking a personal loan from a bank, only 25% of consumers (as per the study) trusted a decision made by an AI system,” he says, adding that if a human expert reviews the regulatory process before the loan offer is made to an individual and follows up afterwards, the decision becomes more trustworthy.

“Consumers likely prefer speaking to people because they have a greater degree of trust in them and believe it’s possible to influence the decision, when that’s far from the case. What’s needed is the ability for AI systems to help companies make ethical decisions.”

Why should firms care?

The study highlights that there are serious trust issues with AI. Four in ten (40%) respondents agreed that AI has the potential to improve the customer service of the businesses they interact with. This is where firms need to take note: as the research shows, a lack of trust can have a negative impact on the customer’s digital experience and, ultimately, on the brand’s reputation. Moreover, the study finds that consumers are cynical about the companies they do business with.

Researchers believe that when teams considering implementing AI give customers the opportunity to choose whether they prefer an AI-based or a human-driven experience to resolve their inquiry, it becomes a step towards building trust and transparency.

Despite this, 65% of respondents don’t trust that companies have their best interests at heart, raising significant questions about how much trust they have in the technology businesses use to interact with them. Many believe that AI is unable to make unbiased decisions: Over half (53%) of respondents said it’s possible for AI to show bias in the way it makes decisions.

People still prefer the human touch, with 70% of respondents preferring to speak to a human rather than an AI system or a chatbot when dealing with customer service. Most believe that AI lacks morality or empathy: over half (56%) of customers don’t believe it is possible to develop machines that behave morally. For example, assigning female genders to digital assistants such as Apple’s Siri and Amazon’s Alexa is helping entrench harmful gender biases, according to new research released by UNESCO.

“Because the speech of most voice assistants is female, it sends a signal that women are obliging, docile and eager-to-please helpers, available at the touch of a button or with a blunt voice command like ‘hey’ or ‘OK’,” the report said, adding that technology firms were “staffed by overwhelmingly male engineering teams” and that “the subservience of digital voice assistants becomes especially concerning when these machines give deflecting, lacklustre or apologetic responses even to verbal sexual harassment”.

AI demands new leadership from CIOs

The age of AI demands new leadership from CIOs – in the way they design, implement and regulate their AI strategies. Experts believe that, in the coming years, CIOs will need to pay even closer attention to the potential for algorithmic bias in machine learning. Companies need to be transparent about the data they use to train machine-learning algorithms, to ensure those algorithms do not end up perpetuating bias, for instance in areas such as racial or gender discrimination.
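
One practical way to surface this kind of bias before deployment is to compare model outcomes across demographic groups. The following is a minimal sketch in Python; the group labels, loan decisions and tolerance threshold are hypothetical illustrations, not figures from the study. It computes approval rates per group and flags a demographic-parity gap above the chosen tolerance.

from collections import defaultdict

def approval_rates(decisions):
    """decisions is a list of (group_label, approved_bool) tuples."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {group: approved[group] / totals[group] for group in totals}

def parity_gap(rates):
    """Gap between the highest and lowest group approval rate."""
    return max(rates.values()) - min(rates.values())

# Illustrative loan decisions: (demographic group, model approved?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

rates = approval_rates(decisions)
gap = parity_gap(rates)
print("Approval rates by group:", rates)

TOLERANCE = 0.10  # assumed policy threshold for an acceptable gap
if gap > TOLERANCE:
    print("Warning: approval rates differ across groups; review the training data.")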

CIOs will also need to think twice about giving too much decision-making authority to machines. They need to ensure that they understand, and can explain, the methods machines use to make decisions. Some organizations, for example, have formed ethics committees to oversee questionable aspects of data use.
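
On explainability, one simple approach is to favour models whose reasoning can be read off directly. The sketch below, which assumes an invented two-feature loan-approval dataset rather than anything from the study, fits a scikit-learn logistic regression and prints its coefficients as a human-readable explanation of how each input pushes a decision.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented example: predict loan approval from two applicant features.
# Columns: [income_in_thousands, existing_debt_in_thousands]
X = np.array([[45, 5], [80, 20], [30, 15], [95, 10], [50, 25], [70, 8]])
y = np.array([1, 1, 0, 1, 0, 1])  # 1 = approved, 0 = declined

model = LogisticRegression().fit(X, y)

# The coefficients act as a simple explanation: a positive weight means
# the feature pushes the model towards approval.
for name, weight in zip(["income", "existing_debt"], model.coef_[0]):
    print(f"{name}: weight {weight:+.3f}")

# Explain a single decision for one hypothetical applicant.
applicant = np.array([[60, 30]])
print(f"Approval probability: {model.predict_proba(applicant)[0, 1]:.2f}")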

Another effective way CIOs can close the AI-customer gap is by embracing ‘design feeling.’ Design feeling aims to help developers focus on how their designs make users feel, rather than seeing designs solely as a solution to a problem.

Danielle Krettek, founder of the Empathy Research Lab at Google, mentions in her blog that businesses need to start thinking about design feeling as the next iteration of design thinking, the now widely used methodology aimed at creating more innovative and "human-centered" design concepts. This is an area Google is focusing on in its innovation lab.

Similarly, with Pegasystems’ Customer Empathy Advisor, a new AI tool for optimizing customer experience, CIOs can measure and calibrate the degree of empathy in customer interactions and improve the user experience.

To ensure heightened customer engagement and experience, machines have to understand the humans using them. Soon there will be computers that can tell the difference between a smile and a smirk. That level of sophistication could make customers’ interactions with businesses better and more efficient, AI experts believe.

