Why agentic AI projects fail, part 2: integrating tech, organization and business to drive impact

Author: Jens Eriksvik

This piece is part of the Algorithma whitepaper series. This means it is a longer read that deep-dives into a specific topic, covering some of the more technical aspects of artificial intelligence.

The enterprise AI landscape of 2025 presents a striking paradox: despite unprecedented investment and adoption, a significant majority of AI initiatives fail to deliver their promised value. A prevailing executive blind spot is the root cause of this crisis: a fundamental misunderstanding that treats AI as a standalone technology deployment rather than a holistic systems transformation. By delegating AI to tactical-level teams and focusing on technology procurement over strategic impact, leaders are inadvertently creating the conditions for failure, leading to abandoned projects, exorbitant costs, and lost competitive advantage.

This pattern of failure is well-discussed [1]. The root issue isn't weak models; it's workflows that were built for tools rather than teammates [2], and enterprise architectures optimized for standardization rather than the adaptive coordination that AI agents require.

"The high failure rate of AI is not an engineering problem, but a leadership one. Organizations that treat AI as technology procurement rather than organizational transformation inevitably find themselves with impressive demos that deliver no lasting value. Success requires  leaders to orchestrate three simultaneous transformations: evolving technical systems, adapting human capital, and reinventing business models."

Jens Eriksvik, Algorithma

In this article we present a framework for leaders to navigate this complexity by leading across three simultaneous and interdependent transformations:

  • Technical system evolution: A shift from legacy IT infrastructure to a scalable, data-intelligent, and operationally mature AI foundation [3, 4].

  • Human system adaptation: A cultural and organizational pivot from human replacement to human augmentation, fostering trust, and redesigning roles for a new era of human-AI collaboration [5, 6].

  • Business system reinvention: A strategic reorientation of business models, governance, and performance metrics to capture the intangible and compounding value of AI beyond mere cost savings [7, 8, 9].

The evidence is clear: the high failure rate of AI is not an engineering problem, but a leadership crisis. Mastering these dimensions is no longer an option; it is a strategic imperative for building a future-proof organization.

Diagnosing the AI failure

Enterprise AI initiatives are stalled at an alarming rate, a pattern that points to a systemic breakdown in strategic execution. While the transformative potential of AI is widely acknowledged, the statistics on project failure are sobering. A 2025 survey found that 42% of companies abandoned most of their AI initiatives, a dramatic spike from just 17% in 2024. The average organization, according to the same survey, scrapped 46% of AI PoCs before they ever reached production.[10]

This high rate of abandonment is disproportionate to other technology projects. Analysis from the RAND Corporation confirms that over 80% of AI projects fail, which is more than double the failure rate of non-AI technology projects [11]. Other market data further reinforces this, with some analysts claiming as many as 85% of projects fall short of their goals [12] and only 25% of projects making it to production [13]. The consistent signal from these varying statistics is not the specific number itself, but the undeniable pattern of severe and widespread implementation failures.

The high frequency of projects getting trapped in what industry analysts call "pilot paralysis" highlights a critical flaw in traditional deployment models [1, 10]. Organizations launch a series of proof-of-concepts in isolated "sandboxes," often demonstrating a model's technical feasibility. The technology may work perfectly in this controlled environment, but the initiatives inevitably stall when it is time to scale for go-live. The path from prototype to production is rarely designed from the outset, leaving critical integration challenges, such as secure authentication, compliance workflows, and user training, unaddressed until the project's fate hangs in the balance. This failure to design for a clear path to production is a primary characteristic of a tactical, rather than a strategic, approach to AI. 

Worse still, this pattern often signals "AI-washing"; deploying AI initiatives for appearance rather than genuine business transformation. Organizations that treat AI as a technology procurement exercise, rather than a fundamental reimagining of how work gets done, inevitably find themselves with impressive demos that deliver no lasting value.

Root causes beyond the tech

The failure of enterprise AI initiatives is not only a result of technical shortcomings but a consequence of strategic misalignment. The single largest contributor to AI failure is the tendency to treat it as a "stand-alone IT project" rather than an integrated business transformation [1, 13]. This approach creates a vacuum of leadership, as executives often delegate implementation to the IT or digital department, a practice that time and again proves to be a recipe for failure [14].

The delegation of AI to a single department leads directly to initiatives that are "disconnected" from the company's overall strategic objectives [13]. Without a clear, integrated vision, projects emerge in silos, lacking the shared success metrics or coordinated timelines necessary for cross-functional collaboration. This siloed approach also suffers from a critical lack of executive sponsorship: a 2024 PwC report found that nearly 65% of executives themselves believe their AI initiatives are unsuccessful for this very reason. Without a champion to secure resources and enforce cross-functional coordination, a technically sound project becomes a disconnected "pilot" that fails to address the complex, real-world challenges of a full-scale deployment [10]. This does not mean a company needs a full-fledged, all-encompassing strategy on day one; it means AI needs to be part of a business transformation.

This tactical-level focus also leads to "model fetishism," where engineering teams spend quarters optimizing technical metrics while the business case remains theoretical and critical integration tasks sit in the backlog. The disconnect between technical teams and business stakeholders means that when these projects are finally presented for review, they are not anchored in tangible value. The lack of ownership and alignment from the outset results in an environment where scaling becomes impossible and the likelihood of failure increases over time. This pattern illustrates that the problem is not a lack of technical prowess, but a failure of leadership.

Lessons from the AI graveyard

AI failures serve as cautionary tales that underscore the consequences of strategic and systemic missteps. These examples demonstrate that the stakes of AI deployment extend far beyond financial losses, impacting brand reputation, legal liability, and ethical standing.

The case of the Air Canada chatbot provides a clear lesson on the legal and reputational risks of delegating AI without oversight [15]. The airline's AI-powered chatbot provided a customer with incorrect information about bereavement fare refunds, which contradicted the company's official policy. The customer sued and won, with the court ruling that Air Canada was liable for the false information provided by its AI agent. This incident demonstrates that an AI is not an isolated tool but a legal extension of the business, and its outputs must be governed with the same rigor as any human employee [6].

Similarly, Amazon's AI recruiting tool, designed to streamline the hiring process, ended up discriminating against women [15]. The system was trained on a dataset of resumes that were overwhelmingly from male candidates, causing the AI to learn and perpetuate a bias against female applicants. This failure was a direct consequence of a flawed data strategy, illustrating that a company's historical biases can be amplified by an AI system if not properly addressed through a comprehensive governance framework. The failure here was not in the machine's ability to learn, but in the human-led process that fed it flawed data.

These failures demonstrate that the consequences of a blind spot are not just limited to abandoned projects but can undermine a company's competitive position.

Evolving the tech platform for AI

The high rate of AI project failure at the production stage often stems from a fundamental mismatch between the demands of AI and an organization’s legacy technical infrastructure. Executives frequently fall into the trap of "technology fetishism," focusing on acquiring the latest models and tools without first building the foundational infrastructure to support them [15]. The pervasive "build-it-and-they-will-come" fallacy proves fatal when a sophisticated model cannot scale beyond its sandbox environment due to "infrastructure blind spots" and integration hurdles. [10, 17]

AI requires an infrastructure that is fundamentally different from that of traditional IT systems. The demands are relentless, including the need to handle massive computational workloads, support real-time data flows, and ensure seamless scalability [17, 18]. Traditional systems, which are often siloed and unable to handle the sheer volume and speed of data, create bottlenecks that limit scalability and prevent projects from moving to production. This has led to a strategic shift towards cloud-based or hybrid solutions that can provide the necessary computational resources and fluid scalability [19].

“The prevailing tendency to delegate AI to IT departments while focusing on technology procurement over strategic integration creates the conditions for failure. Leaders must stop treating AI as a standalone project and start orchestrating it as the organizational transformation it truly is."

Peter Wahlgren, Algorithma

The evolution of the tech must be architected, not procured [20]. Leaders must invest in a foundation that is designed for AI from the ground up, embracing principles learned from high-performance computing environments [17]. This includes creating tightly aligned compute and storage, ensuring fault tolerance for long-running jobs, and building systems capable of real-time decision-making, such as Edge AI, which performs AI workloads at or near the data source [21]. The imperative is to move beyond simply buying new technology to re-architecting the entire technical ecosystem to support a scalable, data-driven future.

The data perspective

The fuel for any AI system is data, and its quality, availability, and governance are paramount to success. Research indicates that as many as 70% of companies cite low-quality data as a significant hindrance to their AI initiatives [13]. A common failure pattern is the presence of "broken data ecosystems" where data is limited, fragmented, and under-used across disparate sources [1, 12].

A tactical approach to AI often fails to recognize that the data lifecycle for AI is dynamic and non-linear [20]. Unlike traditional data management, AI requires data to be continuously ingested, filtered, labeled, transformed, and reused across multiple stages of model development and inference by different teams. Without a robust framework, this process can devolve into data chaos and "shadow AI": enthusiastic teams, without a thought-through plan, create duplicate vector databases and orphaned GPU clusters or cloud environments, cannibalizing data quality and confusing governance efforts [10]. Note, however, that the AI journey should not start with frameworks and governance; it should start as an integrated business change, sponsored by a business leader or champion.

Effective data governance is not a compliance-driven afterthought; it is an architectural requirement for scaling AI [20]. By embedding governance into the technical architecture, organizations can move from a state of data chaos to one where collaboration can thrive without compromising performance or trust.

Operationalizing AI at scale with MLOps

A persistent theme in the analysis of AI failures is the inability to transition from a successful pilot to a full-scale production deployment. The primary reason for this "pilot paralysis" is the lack of a clear, repeatable, and governed path to production. This is where MLOps becomes a critical strategic capability.

MLOps is a set of practices that operationalize the AI lifecycle, bridging the gap between model development and production deployment [21]. It provides a framework for managing the entire AI workflow, from data preparation and feature engineering to model training, deployment, and continuous monitoring. MLOps represents the transition of AI from a bespoke, artisan craft to an industrialized, scalable, and reliable business process. It is the mechanism that ensures AI solutions are not only developed but also deployed, maintained, and improved efficiently and reliably.

Without MLOps, organizations face a number of critical challenges, including high costs, lack of scalability, and models that degrade in performance over time due to data drift. By adopting an MLOps framework, a company can automate the deployment of models, often reducing the time from weeks to minutes, and establishing a consistent and governed process for scaling AI across the enterprise. This capability is not merely a technical concern for engineers; it is a strategic requirement that closes the "production gap" and transforms AI from a series of stalled experiments into a sustained source of business value.
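One concrete MLOps capability behind "continuous monitoring" is drift detection. The sketch below is a minimal, library-free illustration of the population stability index (PSI), a common heuristic for deciding when a feature's production distribution has drifted far enough from its training distribution to warrant investigation or retraining. The `psi` helper, the bin count, and the thresholds in the comments are our own illustrative choices, not any specific vendor's API.

```python
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index between two samples of a numeric feature.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[-1] = float("inf")  # catch production values above the training range

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            for i in range(bins):
                if edges[i] <= x < edges[i + 1]:
                    counts[i] += 1
                    break
            else:
                counts[0] += 1  # values below the training range land in bin 0
        n = len(sample)
        return [max(c / n, 1e-6) for c in counts]  # floor avoids log(0)

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

random.seed(0)
train = [random.gauss(0, 1) for _ in range(5000)]       # training distribution
prod_ok = [random.gauss(0, 1) for _ in range(5000)]     # same distribution
prod_drift = [random.gauss(0.8, 1) for _ in range(5000)]  # shifted by 0.8 sigma

print(round(psi(train, prod_ok), 3))     # small: no action needed
print(round(psi(train, prod_drift), 3))  # large: trigger review or retraining
```

In a production MLOps setup this check would run on a schedule against live feature logs, with the threshold breach feeding an alerting or automated retraining pipeline rather than a print statement.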

Organizing for AI

The transformation of an organization's human capital is non-negotiable for AI success. While a McKinsey study found that a shortage of internal AI skills hampers 58% of businesses, the challenge extends far beyond simple upskilling [13]. The fundamental issue is not a lack of training but a misalignment on the purpose of AI itself. If AI is framed as a cost-cutting tool designed to replace human labor, it will inevitably be met with skepticism and cultural resistance [22]. In our own experience, it is critical to involve business users from day one, iteratively, throughout the project to secure alignment, but also drive change management.

A successful approach requires redefining roles, not just retraining skills, and driving AI projects inclusively and cross-functionally [23, 1]. As AI automates routine, repetitive activities, organizations must redesign job roles to focus on the uniquely human skills that machines lack, such as creativity, problem-solving, judgment, and intuition [24]. The shift in perspective must be from "human replacement" to "human augmentation". For example, a customer service representative can be upskilled to use generative AI and chatbots to answer questions faster, freeing them to handle more complex inquiries and provide a higher level of service [25]. This is what we see with our clients, where the time that is freed up is re-invested in higher-value activities and a greater variety of work. This reframing, where AI is viewed as a "digital colleague", is essential to building an adaptable and purpose-driven workforce [23, 7].

"The shift from AI as a tool to AI as a teammate represents a significant workplace transformation. When we move from measuring 'math done' to 'work owned,' we unlock the true potential of algorithmic businesses where digital colleagues don't just automate tasks, they participate in workflows, contribute to outcomes, and evolve alongside human team members as true operational partners."

Kristofer Kaltea, Algorithma

To drive this change, business leaders must proactively invest in learning and development programs and communicate a clear, compelling vision for how AI will enhance, not diminish, employee roles. This not only empowers employees to embrace new technologies but also strengthens institutional knowledge by combining human expertise with advanced AI capabilities.

Cultivating an AI-ready culture

An organization's culture is the most significant determinant of its AI strategy's success or failure, a principle captured by the adage that "culture will eat any AI strategy for breakfast" [22]. Without a culture of trust and transparency, employees may resist AI adoption due to fears about job security or the "digital gaslighting" of having their work questioned by imperfect AI detectors. 

Building an AI-ready culture requires leaders to address these deep-seated concerns proactively. The foundation of this culture is transparent communication, where leaders clearly articulate the goals and benefits of AI initiatives and involve employees in the decision-making process [25]. This open dialogue helps to assuage fears and build a sense of shared purpose.

Beyond communication, leadership must foster an environment that encourages cross-functional collaboration and experimentation. AI projects often require teamwork between different departments, such as IT, data science, and business units, and a siloed, territorial mindset can create insurmountable friction. Leaders must create a safe environment where employees are empowered to experiment with AI, learn from failures, and share their insights without fear of negative repercussions [1]. This approach recognizes that the successful integration of AI is not a tech problem, but a deeply human and cultural one.

The new human-AI operating model

The ultimate goal of organizational transformation is to establish a new operating model where humans and AI work as a single, cohesive team. This goes beyond simple automation or assistance tools and moves into a realm of "co-intelligence" with digital colleagues, where machines and humans augment each other's capabilities as true operational partners [2].

The shift from AI as a tool to AI as a teammate represents a fundamental change in how work gets done. Digital colleagues don't simply automate routine tasks; they participate in workflows, make autonomous decisions within defined boundaries, and contribute to outcomes alongside human team members. They possess institutional memory, understand context, and can reason across multiple domains to solve complex problems.

Consider how leading organizations are embedding digital colleagues into their operations:

  • Insurance companies, like one of our clients, deploy AI agents in insurance operations that actively participate in preparing decisions, escalating complex cases to human colleagues while autonomously approving or answering straightforward ones.

  • Retail companies, like our client Byggmax, employ AI teammates to drive sales and staff customer service, working alongside human colleagues [26].

  • Financial services companies, like our VC client, are deploying AI agents to manage financial reporting across their portfolio companies.

Traditional AI implementations focus on clean handoffs between human and machine tasks. Digital colleagues operate differently: they work alongside humans in shared workflows, contributing their unique capabilities while leveraging human judgment, creativity, and ethical reasoning.

This creates a new form of collaborative intelligence where:

  • Humans provide strategic vision, ethical judgment, creative problem-solving, and relationship management

  • Digital colleagues provide continuous monitoring, pattern recognition, data synthesis, and rapid execution at scale

  • Together they create adaptive systems that can respond to complex, dynamic situations with both analytical rigor and human wisdom

The boundary between operator and co-creator dissolves, enabling more natural and productive collaboration between people and intelligent systems. Success in this model requires deliberate design choices:

  • Role definition: Digital colleagues need clear job descriptions, just like human team members, with defined responsibilities, escalation protocols, and performance metrics.

  • Team integration: Workflows must be redesigned to accommodate hybrid teams where digital colleagues participate in meetings (through real-time analysis), contribute to planning (through predictive insights), and execute tasks (through autonomous action).

  • Continuous learning: Both human and digital colleagues must evolve together, with feedback loops that improve collaboration over time and expand the digital colleague's capabilities as trust and competence grow [7].
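To make "role definition" tangible, the sketch below encodes a digital colleague's job description as an explicit, auditable policy object: defined responsibilities, autonomy boundaries, an escalation contact, and a decision log. The class name, field names, and thresholds are hypothetical illustrations of the design choice, not a reference to any product.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalColleagueRole:
    """A digital colleague's 'job description': responsibilities,
    autonomy boundaries, escalation protocol, and an audit trail."""
    name: str
    responsibilities: list
    autonomy_limit: float      # e.g. max case amount it may approve alone
    min_confidence: float      # below this, always hand off to a human
    escalation_contact: str
    decisions: list = field(default_factory=list)

    def decide(self, case_amount: float, model_confidence: float) -> str:
        """Approve autonomously only inside the defined boundaries;
        otherwise escalate. Every decision is logged for review."""
        if model_confidence < self.min_confidence or case_amount > self.autonomy_limit:
            outcome = f"escalate to {self.escalation_contact}"
        else:
            outcome = "approve autonomously"
        self.decisions.append((case_amount, model_confidence, outcome))
        return outcome

agent = DigitalColleagueRole(
    name="claims-triage-agent",
    responsibilities=["triage straightforward claims", "prepare decision briefs"],
    autonomy_limit=5_000.0,
    min_confidence=0.9,
    escalation_contact="senior claims handler",
)

print(agent.decide(case_amount=1_200.0, model_confidence=0.97))   # within boundaries
print(agent.decide(case_amount=40_000.0, model_confidence=0.98))  # above autonomy limit
```

The point of the sketch is that widening the agent's mandate as trust grows becomes a deliberate, reviewable change to `autonomy_limit` and `min_confidence`, rather than an implicit drift in behavior.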

For business leaders, the task is to design for this symbiotic relationship and measure success not in terms of jobs replaced, but by human potential unleashed. The key metrics shift from cost reduction to capability amplification:

  • How much time are employees spending on high-value, uniquely human work?

  • How quickly can teams adapt to new challenges with their digital colleagues?

  • What new solutions become possible when human creativity combines with AI capabilities?

  • How effectively do human-AI teams learn and improve together?

Conversely, it is also about how much work an AI agent can take on [8]. This new operating model represents AI's true value as an enabler of human potential, allowing organizations to redesign work so that both humans and machines contribute their distinctive strengths to create outcomes neither could achieve alone.

Reinventing the business

The most successful AI transformations transcend efficiency and enable the emergence of truly algorithmic businesses where digital colleagues fundamentally reshape how work gets done. These aren't automated tools or sophisticated chatbots; they are AI agents embedded in organizations as active digital colleagues: copilots, advisors, automation agents, and decision-makers that participate in workflows, influence outcomes, and contribute to business results alongside human team members. Leading organizations are discovering that when we think of AI agents as colleagues rather than tools, everything changes: it's about scope, trust, and how work is shared across the team [2].

This shift requires entirely new frameworks for management and accountability. Algorithmic businesses measure their digital colleagues by work owned, not math done [8], creating performance structures where both human and AI team members contribute to shared objectives through joint objectives (goals shared between human and AI actors) that formalize accountability across hybrid teams. Organizations deploying agentic AI at scale are achieving performance levels that neither humans nor AI could accomplish independently, scaling not by adding resources but by reimagining how work gets done.

Rather than incrementally improving existing processes, algorithmic businesses redefine core operations and explore new business models, leveraging AI and data-driven insights to create entirely new ways of working. Digital colleagues enable predictive operations that anticipate challenges before they arise, adaptive business models that learn from every customer interaction, and continuous innovation cycles that compound competitive advantages over time. Companies that achieve this transformation don't just capture cost savings, they become organizations that continuously reshape their role in the market, unlocking exponential growth opportunities that traditional business models cannot access.

Metrics of AI success

A major strategic blind spot for executives is the attempt to measure the value of AI using outdated, traditional financial metrics. A number of analysts argue that traditional ROI is an "imperfect, and often misleading, tool" for evaluating AI investments because AI moves "micro-levers across complex systems, producing both immediate and downstream effects that are hard to isolate, quantify or time-box" [27]. Without a new framework for measurement, promising AI initiatives can be perceived as underperforming and are canceled prematurely.

To address this, a new, multi-faceted measurement framework is required that captures both quantitative and qualitative value [28]. This framework should include:

  • Operational metrics: These measure the efficiency and performance of the AI system itself. Key indicators include system uptime, error rates, model latency, and request throughput.

  • Business impact metrics: These quantify the direct effect on business outcomes. Examples include a reduction in average handling time for customer inquiries, an increase in customer satisfaction scores, and the reduction of operational costs [31]. Time-to-value is a critical metric that assesses how quickly a solution begins to deliver benefits [30].

  • Human-centric metrics: These measure AI's impact on the human workforce and culture. Important metrics include employee adoption rates, frequency of use, span of responsibility [8], and employee experience scores (eNPS).

  • Strategic metrics: These capture the intangible, long-term value. One innovative approach is the use of "digital twins" to simulate the long-term, compounding effects of AI interventions on customer behavior, allowing leaders to quantify the value of things like customer engagement and content consumption [27].

By adopting a new suite of metrics, leaders can move beyond an outdated, one-dimensional view of AI's value and begin to measure its true, multifaceted impact on the organization. This shift in measurement transforms AI from an experimental tool into a strategic engine of deliberate business planning and innovation.

The governance engine

The ultimate safeguard against AI project failure and a critical component of strategic leadership is a comprehensive AI governance framework. These frameworks are not just for mitigating risk; they are the engine that ensures AI is developed and deployed responsibly, securely, and in alignment with a company's values [30].

The responsibility for effective AI governance ultimately rests with the CEO and senior leadership. By prioritizing a responsible approach, leaders send a clear message to all employees that AI must be used responsibly. A robust governance framework provides a structured approach to address a multitude of risks, including data privacy, algorithmic bias, and security vulnerabilities. This involves a shift from an informal or ad hoc approach to a formal, comprehensive framework that aligns with the organization's values and with global regulatory standards like the EU AI Act.

Key best practices for a formal governance framework include:

  • Data quality management: As demonstrated by the IBM Watson failure, the integrity of training data directly impacts the reliability of AI outcomes. Governance must focus on ensuring the availability of high-quality data and using AI-powered tools to automate data quality processes at scale.

  • Stakeholder engagement: A human-centered approach to AI governance requires the involvement of a wide range of stakeholders, from developers to ethicists and end-users. This fosters transparency, accountability, and a shared understanding of the ethical and practical considerations of AI.

  • Regulatory compliance: The legal and regulatory landscape around AI is evolving rapidly, and companies must stay up to date on data protection laws, privacy regulations, and industry-specific guidelines to avoid significant penalties.

By embedding governance into the core of their AI strategy, leaders can move from a reactive, risk-averse posture to a proactive one that uses governance as a tool to build trust, foster innovation, and ensure the long-term sustainability of their AI initiatives.

The need for leadership

The high failure rate of enterprise AI initiatives is not a tech problem to be solved by engineers alone; it is a leadership crisis that demands a new leadership approach. The prevailing tendency to treat AI as a standalone IT project has resulted in a landscape of stalled AI pilots, siloed initiatives, and underleveraged assets. To succeed, leaders must stop delegating and take ownership of the AI transformation. This requires leading across the three areas we have discussed in this whitepaper:

  1. Evolving the tech platform for AI: Leaders must move beyond technology procurement and invest in building a scalable, data-intelligent, and operationally mature infrastructure. This includes re-architecting for cloud and edge computing, embedding data governance as an architectural requirement, operationalizing AI at scale with MLOps, and revisiting assumptions about enterprise applications.

  2. Organizing for AI: Leaders must pivot from a "human replacement" mindset to one of augmentation. This means redesigning roles to focus on uniquely human skills, fostering a culture of transparent communication and trust, and creating an environment where humans and AI can collaborate as a cohesive, co-creative team.

  3. Reinventing the business: Leaders must shift their strategic focus from tactical efficiency to business model reinvention. This requires developing a new suite of metrics to measure AI’s true, multifaceted value, including intangible benefits, and establishing a comprehensive governance framework to ensure fairness, accountability, and regulatory compliance.

The future of business belongs to the companies that can move beyond a sprawling approach to AI and embrace it as a catalyst for transformation. By leading across the technical, human, and business dimensions, leaders can transform AI from a source of risk and frustration into a powerful engine of continuous innovation.

This approach enables an organization to escape the cycle of "pilot purgatory" and misalignment and to build a foundation where AI is not a standalone tool, but an integral part of its operational, cultural, and strategic DNA. The path to AI success is not optional; it is a necessity for building the resilient and differentiated enterprise of tomorrow.
