The Accountability Gap: Why Your AI Investments Have No Owner
When everyone is responsible for AI success, no one is.
Published: March 3, 2026
Author: Tommy Kenny
Category: AI Leadership
Reading Time: 6 minutes
The Uncomfortable Math
Here's what the latest research reveals about enterprise AI investments:
- 1 in 50 delivers transformational value (Gartner)
- 1 in 5 delivers any measurable ROI
- 95% of enterprise AI pilots create no measurable P&L impact (MIT Media Lab's Project NANDA)
- 62% of organizations have never advanced past the pilot stage
Yet companies are preparing to double their AI spending in 2026—from an average of 0.8% of revenue to 1.7%, according to BCG.
The gap between investment and return has a clinical name in analyst frameworks: the Trough of Disillusionment. But calling it disillusionment suggests the problem is emotional. The evidence points to something more structural.
It's an accountability gap.
The CEO Problem
Three-quarters of CEOs are now their organization's primary AI decision-maker, according to BCG research published in January 2026. The Conference Board's 2026 C-Suite Outlook confirms that AI has moved from the margins to the center of corporate strategy—simultaneously identified as a top investment priority, a leading external risk, and a governance concern.
That concentration of authority would be appropriate if CEOs had frameworks to evaluate what they're approving.
Most do not.
A Gartner survey found that fewer than 30% of CEOs were satisfied with returns on their AI investments—yet spending continues to accelerate. When a leader cannot define what success looks like before a project launches, or identify failure when it arrives, accountability has nowhere to land.
The result is a pattern that repeats across industries:
- AI initiative gets approved with vague success criteria
- Initiative launches with fanfare
- Months pass without clear measurement
- Initiative quietly gets shelved or deprioritized
- No post-mortem, no lessons learned
- Next initiative approved with the same pattern
Technology Decision or Leadership Decision?
The fundamental error is treating AI adoption as a technology decision when it is, at its core, a leadership and governance decision.
Harvard's framework for responsible AI governance identifies accountability as foundational. Every significant AI decision must have a designated business owner who can:
- Explain why this specific investment was made
- Adjust when outcomes don't match expectations
- Answer for results—positive or negative
Most AI projects have none of these.
They have sponsors who approve budgets. They have technical teams who build systems. They have users who adopt tools. But they rarely have a single owner who is accountable for the business outcome.
The Pilot Purgatory Pattern
Kyndryl's 2025 Readiness Report surveyed 3,700 senior leaders across 21 countries and found that 62% of organizations have not advanced their AI projects beyond the pilot stage.
This isn't a technology scaling problem. It's an accountability scaling problem.
Pilots are easy to approve because they're cheap and reversible. Moving from pilot to production requires someone to own the outcome—to stake their reputation on it working. Without clear accountability structures, pilots accumulate indefinitely while production deployments stall.
The organizations stuck in pilot purgatory share common traits:
- Success criteria defined after launch (or never)
- No single owner with authority to adjust or kill the project
- Progress measured in activity (users, adoption, engagement) rather than impact
- No post-mortem process when projects underperform
What Accountability Actually Requires
The organizations pulling measurable returns from AI have something beyond good governance frameworks. They have leaders who understand that AI adoption is a human change management challenge as much as a technology one.
Here's what effective AI accountability looks like:
1. Named Business Owners (Not Technical Owners)
Every AI initiative needs a business leader—not a technical leader—who owns the outcome. This person should be able to answer: "If this works, what business metric improves by how much?"
If no one can answer that question, the project shouldn't launch.
2. Pre-Defined Success Criteria
Before any significant AI investment, define:
- What specific metric will this improve?
- By how much?
- In what timeframe?
- How will we measure it?
These criteria should be documented and reviewed quarterly. If the project isn't tracking toward its goals, the owner must either adjust or recommend termination.
3. Quarterly Kill Reviews
Schedule explicit decision points where projects can be killed or scaled. Most organizations never create formal mechanisms to terminate underperforming AI projects—so they linger indefinitely, consuming resources and generating "activity metrics" that obscure their failure.
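The success criteria in step 2 and the kill review in step 3 can be expressed as a simple tracking record. A minimal sketch (all field names, thresholds, and the review logic are illustrative, not a prescribed template):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SuccessCriteria:
    """Documented before launch: metric, magnitude, timeframe, measurement."""
    metric: str           # what specific metric will this improve?
    target_change: float  # by how much? (e.g. -0.15 = 15% reduction)
    deadline: date        # in what timeframe?
    measurement: str      # how will we measure it?

@dataclass
class AIProject:
    name: str
    business_owner: str   # a named business owner, not a technical lead
    criteria: SuccessCriteria
    kill_threshold: float # minimum fraction of target required to survive review

    def quarterly_review(self, observed_change: float, today: date) -> str:
        """Return this quarter's kill-review decision: scale, adjust, or kill."""
        progress = observed_change / self.criteria.target_change
        if progress >= 1.0:
            return "scale"   # goal met: move toward production
        if today > self.criteria.deadline or progress < self.kill_threshold:
            return "kill"    # explicit termination trigger fired
        return "adjust"      # owner must course-correct before next review

# Example: a project targeting a 15% cost reduction that has delivered only 2%
project = AIProject(
    name="Invoice triage assistant",
    business_owner="VP Finance Ops",
    criteria=SuccessCriteria(
        metric="cost per invoice",
        target_change=-0.15,
        deadline=date(2026, 6, 30),
        measurement="monthly cost per invoice vs. Q4 2025 baseline",
    ),
    kill_threshold=0.25,
)
print(project.quarterly_review(observed_change=-0.02, today=date(2026, 3, 31)))
# → "kill" (progress is ~13% of target, below the 25% survival threshold)
```

The point of the sketch is that every field is an answer someone must supply before launch; if any field cannot be filled in, the accountability gap is visible on day one rather than discovered at quarter three.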
4. Post-Mortems for Every Outcome
Whether a project succeeds or fails, conduct a structured review:
- What worked?
- What didn't?
- What would we do differently?
- What should the next project learn from this?
These reviews should be required—and their learnings should be visible to leaders approving the next round of investments.
The Investment Paradox
Companies are doubling AI spending despite poor returns. This isn't irrational—it reflects genuine competitive pressure. Leaders fear that failing to invest in AI will leave them behind.
But doubling down on investments without fixing the accountability gap just compounds the problem. You're not increasing your chances of transformation. You're increasing your exposure to the same failure pattern, at twice the scale.
The organizations that will pull ahead aren't those spending the most on AI. They're those who've solved the accountability problem:
- Every project has an owner
- Every owner has clear success criteria
- Every outcome—positive or negative—generates learning
- Leadership can explain why they're investing in specific initiatives
Three Questions for Your Next AI Decision
Before approving the next AI investment, ask:
1. "Who owns this outcome?"
Not who sponsors it, not who builds it, not who uses it. Who is accountable for the business result? If no one can answer clearly, don't approve it.
2. "What does success look like in 90 days?"
Not "adoption" or "engagement"—what business metric improves? By how much? If the answer is vague, the project will generate activity without accountability.
3. "What triggers termination?"
Under what conditions do we kill this project? If there are no explicit failure criteria, the project will live forever in pilot purgatory, consuming resources without producing results.
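The three questions reduce to a pre-approval gate: no concrete answer to any one of them, no approval. A minimal sketch (the function and parameter names are illustrative):

```python
from typing import Optional

def approve_ai_investment(owner: Optional[str],
                          ninety_day_metric: Optional[str],
                          kill_trigger: Optional[str]) -> bool:
    """Approve only when all three questions have concrete, non-empty answers."""
    answers = [owner, ninety_day_metric, kill_trigger]
    return all(a is not None and a.strip() for a in answers)

# A vague or missing answer to any question blocks approval
approve_ai_investment("VP Claims", "claims cycle time -10%", None)   # blocked
approve_ai_investment("VP Claims", "claims cycle time -10%",
                      "below 25% of target at any quarterly review")  # approved
```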
The Bottom Line
The AI accountability gap isn't about technology failing. It's about leadership structures that were never designed for the pace and scale of AI investment.
When CEOs become primary AI decision-makers without frameworks to evaluate outcomes, accountability evaporates. When projects launch without clear owners, success criteria, or termination triggers, failure becomes invisible.
The organizations that will transform are not those investing the most aggressively. They're those who've built the leadership infrastructure to ensure that every dollar invested has an owner who can explain what it produced.
Investment without accountability isn't bold. It's expensive.
Tommy Kenny is the founder of Digital Executive Insight and author of Pragmatic Disruption. He advises executives on building AI strategies with clear accountability and measurable outcomes.
Related Reading:
- Coordination Theater — Why your AI strategy looks better from the top
- The Conviction-Execution Gap — Why knowing AI matters isn't enough
- Why 70% of AI Projects Fail — And what smart executives do differently
Sources: Gartner AI ROI research, MIT Media Lab Project NANDA "GenAI Divide: State of AI in Business 2025," BCG AI spending analysis (January 2026), The Conference Board 2026 C-Suite Outlook Survey, Harvard responsible AI governance framework, Kyndryl 2025 Readiness Report