Nearshore Americas

Seven Perils Businesses Face in AI Implementation

The AI train is gaining momentum, and the pressure to jump on board grows more intense by the day. Unfortunately, the urgency of the situation might lead some companies to fall flat on their faces as they race against the machine.

There’s no shortage of perils for businesses that embark on today’s hottest tech adventure. From misunderstanding AI’s capabilities to a lack of proper data management, potential missteps abound, and company leadership should at the very least be aware of such risks.

With that in mind, NSAM provides the following list of potential pitfalls for companies (whether third-party providers or customers in search of a tech partner) that mean to jump into the increasingly relevant (and strange) landscape of AI applied to business.

Sci-Fi Dreaming

The potential of AI has been trumpeted to the point that anyone unfamiliar with the technology would believe it to be limitless in its current capabilities: something akin to an electronic superbrain imagined by Asimov.

The state of AI has yet to catch up to science fiction, something that businesses are finding out in their attempts to adapt the technology to their internal processes or their market solutions. 

“Where it [AI implementation] goes wrong is when brands try to deliver AI to the wrong experiences,” commented Monti Becker Kelly, Webhelp’s Senior VP of Global Accounts.

Some brands have tried to adapt current AI models to complex customer needs, with quite underwhelming results, Becker Kelly explained.

“Where it [AI implementation] goes wrong is when brands try to deliver AI to the wrong experiences”—Monti Becker Kelly, Webhelp’s Senior VP of Global Accounts.

Teleperformance broke it down in the presentation of its latest financial results, labeling the applications of AI to CX as the “easy” ones (tier-1 support, chat, email, rule-based back-office processing and non-critical translation) and the “difficult” ones (complex customer support, sales, trust & safety, judgment-based back-office processes and critical translation).

Businesses find themselves in a mad scramble to unlock the potential of AI and apply it to their particular industries. This brings tremendous pressure, which easily leads to the “oversimplification” of what is asked of AI, pointed out Was Raham, CEO of AI research firm Prescience, in a Medium post.

“[It] is usually possible to distill a project to one overarching goal and associated AI ‘big question(s).’ But as you drop into detail, the complexity can become daunting,” he wrote.

Who’s the Captain?

With the high expectations imposed on AI technology, the question arises: who should helm its implementation into company processes and solutions? 

Discourse seems to point to the tech-oriented side of the C-suite (CIOs, CTOs, CDOs) for guidance, given their credentials. CEOs have been mentioned too, due to their position as the de facto captains of the company. Even CFOs have been thrown into the conversation, considering the financial muscle the AI journey will require.

Joe Procopio, one of the minds behind the first natural language generation (NLG) tool to make it to market, put his money on CPOs (like himself), though he didn’t feel “super-strongly” about that choice.

“I think the business case and the ROI are the most important factors in terms of adoption,” he commented in an interview with NSAM. “If your CPO can’t figure out how to create viable revenue streams out of automated content, they’re probably the ones to tell you whether or not you should build around it [AI] and what that implementation should look like.”

While AI implementation will, in practice, be a collaborative effort, someone will have to don the big hat. A single vision (even if fed by a multitude of others) will put the company on a definite path, away from the immensity of the forest.

No Quantification

Though AI technology has been around for years, businesses have yet to figure out how to quantify its impact on their operations. Gartner flagged the fact in a survey last year: about a fifth of respondents cited the inability to measure the value of AI and a lack of understanding of its benefits and uses as the top hurdles for implementation.

“People think that AI is the silver bullet that’s gonna fix everything, and that’s never the case, and it never has been,” commented Dan McLean, Senior VP of Business Strategy at Capmation, in an interview. “You have to understand your business problem and what you’re trying to solve.”

Generative AI’s arresting arrival has put companies in that same situation. AI’s potential is heralded everywhere, but “numbers aren’t mentioned very often, [which] leads to a dilemma for businesses spending on AI,” Was Raham pointed out.

“People think that AI is the silver bullet that’s gonna fix everything, and that’s never the case, and it never has been”—Dan McLean, Senior VP of Business Strategy at Capmation

Joe Procopio echoed that sentiment when speaking with NSAM. For him, AI’s virtually limitless potential “doesn’t mean those use cases are all economically viable or even materially useful.”
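Procopio’s point about economic viability can be made concrete with back-of-envelope arithmetic. The sketch below is purely illustrative (all figures are hypothetical, and `simple_roi` is an invented helper, not any firm’s actual model): it weighs projected monthly savings from ticket deflection against the monthly AI bill.

```python
def simple_roi(tickets_per_month, deflection_rate, cost_per_ticket, monthly_ai_cost):
    """Monthly savings minus AI spend; positive means the use case pays for itself."""
    savings = tickets_per_month * deflection_rate * cost_per_ticket
    return savings - monthly_ai_cost

# Hypothetical scenario: 50,000 tickets/month, 30% deflected by the bot,
# $4 per human-handled ticket, $40,000/month for the AI deployment.
print(simple_roi(50_000, 0.30, 4.0, 40_000))  # → 20000.0
```

Even a crude calculation like this forces the conversation McLean and Raham call for: naming the business problem and putting a number on the benefit before spending.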

Don’t Overhaul; Integrate

AI’s arrival feels like a revolution in business. Radical change is expected, and leadership teams might feel pulled to overhaul their operations.

Nevertheless, not every application of AI must result in a complete makeover. It’s likely that early use cases will look more like plug-ins or integrations rather than shiny, new toys for proud display in the storefront.

“That’s where we’re seeing the most traction, with our integration capabilities,” said Webhelp’s Becker Kelly. “You have this tool; we can accelerate these areas; we can help them determine what’s the best use for their current architecture and where we can add value. We believe in leveraging our clients’ current investments, and that makes our solutions even more attractive.”

Building a Proper AI Team

Tech hiring is difficult enough as it is. For the purpose of AI development and implementation, building a team brings an extra layer of challenge for leadership. 

As stated before, businesses have yet to figure out the true capacity and value of AI in general (and G-AI in particular). Without that guidance, hiring comes dangerously close to guesswork.

R&D-dependent businesses are turning more towards outsourcing to get the tech expertise they need, and AI professionals are already positioned among the most sought-after profiles. But even when depending on third-party partnerships, organizations will need their own in-house expertise.

“If you plan to use third parties for AI work, you’ll need in-house AI skills to commission and manage work. You’ll also need them to assess those you pay to do that work,” wrote Raham, adding that these pitfalls could be mitigated by focusing on “broader AI knowledge for your initial hires.” This would avoid overspecialization before being sure about what skills will be needed for the AI journey.

“I think the business case and the ROI are the most important factors in terms of adoption”—Joe Procopio, CPO at Get Spiffy

Data Dieting

The performance of AI tools depends on their data diet. Servings must be substantial and frequent, but priority should be given to their nutritional value.

“Although it will take the AI system longer if datasets are shorter in nature, you will have some guarantee that your output will be robust and relevant,” wrote Mikaela Pisani, Head of ML at Uruguayan developer Rootstrap. “It’s not productive to feed an AI system lots of data just for the hope that it will learn something from it.”

Organizations should also label their data properly to make sure that the correct inputs are being fed into the machine. In turn, there should be clarity about the purpose of feeding the AI with those specific data sets.

“Companies generally know that value is buried in their data, but they often neglect to check the quality of the data or validate the use cases with data,” wrote Dwijendra Dwivedi, AI and Analytics Team Leader at business analytics firm SAS. “It is essential to define the use cases required and then develop the necessary data to support them.”
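The advice from Pisani and Dwivedi (check quality and labels before feeding data to the machine) can be sketched as a minimal audit pass. Everything here is hypothetical: the `text`/`label` schema, the allowed label set and the `audit_dataset` helper are illustrative assumptions, not any vendor’s actual tooling.

```python
from collections import Counter

REQUIRED_FIELDS = {"text", "label"}                 # hypothetical record schema
ALLOWED_LABELS = {"billing", "shipping", "other"}   # hypothetical label set

def audit_dataset(records):
    """Report basic quality problems in a labeled dataset before training."""
    report = {"missing_fields": 0, "bad_labels": 0, "duplicates": 0}
    seen = set()
    for rec in records:
        if not REQUIRED_FIELDS.issubset(rec):
            report["missing_fields"] += 1
            continue
        if rec["label"] not in ALLOWED_LABELS:
            report["bad_labels"] += 1
        key = (rec["text"], rec["label"])
        if key in seen:
            report["duplicates"] += 1
        seen.add(key)
    # Label balance matters too: a heavily skewed set trains a skewed model.
    report["label_counts"] = Counter(
        r["label"]
        for r in records
        if REQUIRED_FIELDS.issubset(r) and r["label"] in ALLOWED_LABELS
    )
    return report

sample = [
    {"text": "Where is my order?", "label": "shipping"},
    {"text": "Where is my order?", "label": "shipping"},  # duplicate row
    {"text": "Refund please", "label": "billng"},         # misspelled label
    {"text": "Hello"},                                    # missing label field
]
print(audit_dataset(sample))
```

A pass like this is cheap to run and surfaces exactly the problems the quotes above warn about: unvalidated quality, mislabeled inputs and data fed in without a defined use case.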

Careless Handling

The tsunami of enthusiasm for G-AI has been met with sounds of alarm. From people long involved with the technology itself to representatives of the leading firms in the AI race, there has been no shortage of concern over the security (and even existential) implications of G-AI. In a recent Salesforce survey, 71% of the 500 senior IT leaders consulted said they expect G-AI to “introduce new security risks to our data.” Also, 65% recognized that they “can’t justify the implementation of generative AI at the moment.”

Business leaders, activists and politicians are swiftly waking up to the potential privacy risks of unregulated, poorly thought out implementation of the technology. Major financial firms are cracking down on its use among employees, and academics have described ChatGPT in particular as a “data privacy nightmare.”

“As chatbots regularly gather personal data, concerns regarding data privacy have surfaced,” Terence Jackson, Microsoft’s Chief Security Advisor, warned in a Forbes article. “Furthermore, AI generation provides a novel extension of the entire attack surface, introducing new attack vectors that hackers may exploit.” 

Though warnings are traveling the airwaves, businesses are still racing to the top, and some might not give the proper amount of thought to security risks in their mad climb.

BPO firms in particular have been drifting into more dangerous waters when it comes to data privacy. The lawsuits keep hitting, and they’ll keep hitting as cyber criminals build more sophisticated tools. Third-party partners know they will have to step up their security game if they want to enter the AI minefield without it blowing up in their faces (and their customers’).

“Vendors of intelligent features will often introduce changes and updates without fanfare, exposing their customers to unexpected risks and vulnerabilities,” Dwijendra Dwivedi warned. “You must stay aware of model actions and ensure they continue operating as expected and required.”

Cesar Cantu

Cesar is the Managing Editor of Nearshore Americas. He's a journalist based in Mexico City, with experience covering foreign trade policy, agribusiness and the food industry in Mexico and Latin America.
