From technology giants like Google to major management consultants like McKinsey, a rapidly growing number of companies preach an “AI-first” strategy. In essence, this means treating AI as the ultimate strategic priority, one that takes precedence over all other directions. At first glance, this strategy seems logical, perhaps even inevitable. The figures speak for themselves: the sheer volume of investment flowing into AI technologies reflects deep confidence in an increasingly AI-driven future.


But could this approach be a major strategic misstep, the very thing undermining AI transformation initiatives?


As organizations increasingly prioritize AI above everything else, they risk forgetting that technology’s primary purpose is to solve problems. An AI-first approach can drive rapid AI deployment across business operations not because it solves “real” organizational or customer problems, but because AI implementation becomes an end in itself. The likely outcome is a proliferation of AI solutions in search of problems, or worse, solutions that create new problems.


When AI-First Goes Wrong


Consider Uber’s reported use of AI-generated images on its delivery app. Some of the AI-generated food images were low-quality, irrelevant, or even absurd. But the bigger problem is that such images fail to address the primary consumer need in this context: an authentic visual representation of the food a customer is considering ordering. Instead, they create additional issues by misleading consumers and setting unrealistic expectations.


Even when AI is applied to real problems, an AI-first approach can be misguided. AI solutions, at least the existing ones, do not help with every problem and can even be counterproductive in certain cases. Take, for example, the results of an experiment at Boston Consulting Group. While consultants working with GenAI showed significant efficiency and effectiveness gains overall, the technology did not improve performance uniformly: it actually hindered performance on tasks deemed “outside the frontier” of GenAI’s capabilities.


An AI-first approach can also prompt the adoption of flashy applications before the core IT infrastructure is ready. A recent survey by Equinix shows that a significant portion of IT managers (42%) lack confidence that their infrastructure can handle AI demands, even as 85% already deploy or plan to deploy AI. Put simply, if a core system, say payroll, has fundamental flaws, investing in an intelligent payroll chatbot before fixing the underlying software will be of little use. Likewise, most companies lack the data capabilities AI depends on, and without them even the most impressive AI applications will be premature.


Embracing an AI-first approach also sends a clear, if implicit, message to employees, one that is hardly motivating: if AI is first, they are at best second. This is likely to exacerbate existing concerns about AI taking their jobs, a fear that is not unfounded given anecdotal evidence and predictions about AI-driven job replacement. As a result, employees will likely be even less committed to AI initiatives. And, as we know from academic research, without such employee buy-in a successful transformation is highly unlikely, if not outright impossible.


Aside from employee disengagement, the deployment of AI can have a range of unintended behavioral consequences. Consider a recent study examining employee reactions to algorithmic management of tasks like performance evaluations. The deployment of algorithms reduced employees’ motivation to help others, as they began to view their colleagues more as objects than as human beings. Thus, even if these algorithms prove effective, they can have far-reaching impacts, potentially eroding the very fabric of organizational culture.


Similarly, an AI-first approach can backfire when it comes to consumer responses. In contexts where AI takes over roles traditionally defined by human qualities, such as customer service, consumer reactance can be a significant hurdle. It can be overcome by, for example, emphasizing the human element in AI applications. Yet an uncompromising pursuit of AI can divert attention away from understanding these subtle but important behavioral consequences.


A primarily tech-focused approach is also riddled with risks given the plethora of ethical dilemmas and legal ambiguities surrounding AI. What compass guides a manager who must choose between AI implementation and ethical principles? Take Amazon’s experience as a case in point. The company once developed an AI-powered CV-screening tool, only to discover that it discriminated against women. Amazon ultimately scrapped the tool, but therein lies a critical question: would managers consistently make the right call when AI is prioritized over everything else? It is all too easy to find examples from recent history where principles were sidelined, with grave consequences, from manipulating voter behavior to harming teen mental health.


To be clear, the problem with an AI-first strategy lies not in the “AI” but in the “first”; it is about how organizational focus is directed. An AI-first approach can be myopic, leading organizations to overlook the true purpose of technology: to serve and enhance human endeavors.


A Balanced Approach to AI


A different, more balanced, and thoughtful approach to AI transformation is not only possible but also likely more effective. Instead of embracing an AI-first strategy, I recommend organizations keep the 3Ps at the forefront of their AI transformation: problem-centric, people-first, and principle-driven. The core premise of this approach is harnessing the potential of AI without losing sight of organizational objectives, the human side of technology, and core values.


Problem-centric.


Start with the problem, not the technology. Consider how AI can be used to achieve strategic objectives and tackle organizational challenges in more efficient, effective, or innovative ways. For example, instead of launching an AI customer service chatbot just because everyone else is, a retailer could analyze service logs and complaints to develop targeted AI solutions that address actual customer pain points. Similarly, a marketing team should first understand the brand voice and target audience before exploring text-to-image AI to generate edgy ad visuals.


People-first.


Prioritize humans over AI, placing the core emphasis on how AI can empower people. This calls for a proactive effort to understand what AI means for employees and customers, open communication with them, and consideration of broader behavioral consequences. For example, a people-first approach would shift the focus from pure efficiency or performance gains on a task to how AI can make jobs more intrinsically motivating, such as by automating undesirable tasks. Likewise, an HR team could develop a comprehensive training program focusing on how GenAI tools can augment current job responsibilities while also easing fears of replacement.


Principle-driven.


Reflect on the ethical and legal aspects of AI deployment to articulate your organization’s stance on AI. Consider aspects such as fairness, bias, privacy, and transparency. Establish a clear policy to guide AI implementation within the organization, ensuring projects align with overarching values. For instance, one such principle could be keeping humans in the loop for consequential decisions like hiring and promotion. Likewise, when sourcing new AI partners, a procurement team could require vendors to explain in their proposals how their models minimize bias and protect user privacy.


Given AI’s tremendous potential, the rush to integrate it into every facet of business operations is understandable. Ironically, however, by putting AI ahead of everything else, organizations may be setting themselves up for failure. True success with AI transformation may well be more attainable for those who put strategy, humans, and principles before AI.

Source: Is Your AI-First Strategy Causing More Problems Than It’s Solving?