What the MIT “95% Failure” Stat Gets Right... and Wrong

Leslie Lee | Aug 22, 2025

A single number from MIT’s GenAI Divide report has been circulating widely:
95% of GenAI projects fail to deliver meaningful ROI.

It’s an arresting statistic, and most commentary stops there.

But the real value of the report is not the headline. It’s what the data reveals about why enterprise AI efforts stall and what differentiates the few that move past pilots from the many that don’t.

What the Stat Gets Right

The report accurately captures a pattern many enterprise leaders recognize:

AI experimentation is easy.
AI operationalization is not.

Across industries, organizations are running dozens of pilots that never transition into durable workflows. The gap between what’s technically possible and what’s organizationally adoptable remains wide.

In that sense, the “95%” figure isn’t surprising. It reflects friction, not failure of the technology itself.

Where the Stat Misleads

Taken at face value, the number implies that GenAI is fundamentally underperforming. That conclusion doesn’t hold up under closer inspection.

The report itself points to a more nuanced reality.

Employees Are Ahead of Their Organizations

At roughly 90% of surveyed companies, employees already use tools like ChatGPT at work, often without formal approval. Only about 40% of those organizations have enterprise licenses in place.

A shadow AI economy already exists inside most enterprises, quietly reshaping expectations around speed, usability, and autonomy.

The issue isn’t demand. It’s governance and integration.

Adoption Works When AI Is Ambient

The most successful deployments share a common trait: AI is embedded where work already happens.

When teams are forced into new interfaces or additional steps, adoption stalls. When agentic capabilities operate inside familiar tools and workflows, they feel additive rather than disruptive.

This pattern shows up repeatedly across support, operations, finance, and IT: back-office functions that aren’t driven by novelty or demos.

Buy and Partner Beats Build-Only

The data also shows that externally partnered solutions — particularly those that are customizable and capable of learning — outperform purely internal builds by a wide margin.

The most successful enterprises treat AI vendors less like off-the-shelf software providers and more like long-term operational partners. They expect:

  • customization
  • accountability to business outcomes
  • co-evolution through early failures

Benchmarks alone don’t predict success. Integration and learning do.

ROI Lives in the Back Office

While 50–70% of GenAI budgets are currently allocated to sales and marketing pilots, the largest returns often come from less visible domains:

  • eliminating millions in BPO spend
  • reducing agency costs
  • accelerating risk reviews and compliance processes

These aren’t flashy wins, but they compound quickly — and they’re far easier to operationalize.

The Real Signal in the Data

The takeaway from the MIT report isn’t that GenAI is failing.

It’s that learning capability plus seamless integration is the dividing line.

Agentic systems that adapt over time, operate across enterprise data, and stay embedded in real workflows are the ones crossing the GenAI divide. Everything else remains stuck in pilot mode.

Amara’s Law applies here:

We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.

Today’s uneven ROI isn’t a verdict. It’s a snapshot of a transition period.

The more important question isn’t whether GenAI is failing — it’s whether enterprises are willing to invest in the operational foundations required for it to succeed.