Nearshore Americas

AI Hallucinations are a Growing Problem for BPOs

Recently, a chatbot at a mid-sized European retail bank told customers they could repay their loans in more instalments and that the bank would not raise the interest rate. The message was false, and the bot was dispensing it proactively.

By the time the bank fixed the error, the damage had been done. Many customers ignored the message, but a few reportedly started spending more, assuming their financial burden had eased. The incident has since been widely cited in industry discussions and has prompted regulators to push for stronger human oversight of sensitive AI-driven communications.

AI-generated hallucinations of this kind are now keeping BPOs on high alert, forcing many to hire more people to recheck their bots’ work.

Although most BPOs claim to have saved significant time as well as money by integrating AI into their operations, such claims do not appear to be entirely accurate.

“The headline numbers look great, but the real story is more complicated,” says Aravind Chandramouli, Head of AI Center of Excellence at Tredence. “For every 10 hours of efficiency gained through AI, nearly four hours are lost correcting, clarifying, or rewriting AI-generated content,” he said, citing recent studies.

“According to research by Workday, only 14% of workers consistently achieve net-positive outcomes from AI use once rework is accounted for.”

AI hallucinates between 3% and 27% of the time, according to studies from Stanford and MIT. For a BPO handling 10,000 AI-assisted interactions per day, even the low end of that range translates into roughly 300 conversations carrying incorrect information every day.
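The arithmetic behind that figure is simple to verify. A minimal sketch, using only the article’s own numbers (10,000 daily interactions, a 3%–27% hallucination rate; the function name is ours):

```python
def daily_hallucinations(interactions_per_day: int, rate: float) -> int:
    """Expected number of AI-assisted interactions containing errors per day."""
    return round(interactions_per_day * rate)

# Low and high ends of the 3%-27% range cited by the Stanford and MIT studies.
low = daily_hallucinations(10_000, 0.03)
high = daily_hallucinations(10_000, 0.27)
print(low, high)  # 300 2700
```

At the top of the range, that same 10,000-interaction volume would mean 2,700 flawed conversations a day, which helps explain why rework costs can swamp the headline efficiency gains.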

“AI is making work faster, but not necessarily better. Most organizations are measuring output speed, while the real cost is hidden in rework, errors, and downstream corrections,” says Gaurav Sharma, a product leader at ServiceNow.

Brands cannot brush off such errors easily. In 2025, the Australian government asked Deloitte to refund part of the payment for a commissioned report after finding that some of its citations did not exist. The firm acknowledged issues with the report and agreed to a partial refund. In 2024, Air Canada tried to disclaim responsibility for a misleading customer message by arguing that its chatbot was accountable for its own statements. A Canadian tribunal rejected the argument and ordered the airline to honor the fare the bot had quoted.

Andrew Trimboli, Founder & Principal Consultant at Faro CX & Content Consultancy, blames the rising number of hallucination cases on the rush among enterprises to cut costs and please their clients.

“Most BPOs sold AI on average-handle-time and deflection rates, because that’s what the contract was priced on.”

“The brand asked for cost reduction and didn’t define what good looked like,” he added. “The BPOs delivered to brief. If we want better outputs, the buying side has to write better briefs — which means CX leaders pushing back on procurement, not the other way around.”

Hire More Workers for Rework

The rework is now pushing BPOs to expand their workforces. According to a study by Comm100, an AI-powered omnichannel customer service platform, AI integration is freeing up time for agents at large companies but adding work at smaller BPOs: teams of 6–10 agents saw their workload increase by 1.6%, the report said.

Lacey Kaelani, CEO of the job search engine Metaintro, said a growing number of customer experience job descriptions now list “validating AI’s output” and “quality managing” as core responsibilities, meaning employees are increasingly doing oversight work rather than simply running automated processes.

First and foremost, BPOs must stop treating AI as a substitute for human employees, argues Chandramouli. “Basically, rather than automating everything, companies will need to augment intelligently.” He added that, unlike in factories, AI-generated errors are harder to detect in the services sector. “The services sector is pretty exposed, probably more than manufacturing or tech, because services run on judgment, trust, and human nuance.”

“Think about it. In a factory, a defective output is visible. In services—whether it is consulting, legal, financial advice, or customer support — poor-quality AI output can go undetected for much longer, quietly eroding client trust before anyone notices.”

Narayan Ammachchi

News Editor for Nearshore Americas, Narayan Ammachchi is a career journalist with a decade of experience covering politics and international business. He is based in Bangalore, India’s Silicon City.
