
The First 90 Days as an AI Leader

Enterprise AI data brief: the first 90 days as an AI leader

Research brief delivering verified statistics, source assessments, and corrections for the first-90-days AI leadership playbook. The most important correction: the “64% of CEOs” figure attributed to McKinsey is a misattribution.


The Fortune “report” is actually an opinion piece — and the McKinsey “64%” is misattributed

The claimed Fortune February 2026 report is actually a commentary article titled “The AI leadership reckoning is here” by May Habib, CEO of Writer, published February 24, 2026. It is a guest editorial, not a data-driven research report. It argues that C-suite turnover hit record highs in 2025 because organizations haven’t structurally changed despite AI investment. Cite it as an opinion piece on Fortune.com, not a Fortune research report.

The McKinsey “64% of CEOs” claim requires correction. McKinsey’s State of AI 2025 does contain a 64% figure, but it refers to respondents saying AI has improved their ability to innovate — not that AI success depends more on people than technology. The sentiment, however, is strongly supported by other McKinsey data. McKinsey North America Chair Eric Kutcher stated in 2025: “This is probably the biggest, most complex transformation we’ve seen — but it’s 80% business transformation and 20% tech transformation.” McKinsey’s Superagency report (January 2025) found that the biggest barrier to scaling “is not employees — who are ready — but leaders, who are not steering fast enough.” For the article, use Kutcher’s 80/20 quote or the Superagency finding instead.

S&P Global abandonment data: confirmed from primary source

S&P Global Market Intelligence’s “Voice of the Enterprise: AI & Machine Learning, Use Cases 2025” survey (1,006 respondents, October–November 2024, published March 12, 2025) confirms that 42% of companies abandoned most AI initiatives, up from 17% the prior year, with the average organization scrapping 46% of proof-of-concept projects before production. Companies cited cost, data privacy, and security risks as top obstacles. 46% of respondents reported no single enterprise objective had seen “strong positive impact” from GenAI investment. This is among the most widely cited enterprise AI statistics of 2025.

Pertama Partners data is compiled, not original research

Pertama Partners’ “AI Project Failure Statistics 2026” (published February 8, 2026) is the source of two figures: 84% of AI project failures are leadership-driven, and 61% of failed projects were treated as IT projects. However, it is a content-marketing compilation synthesizing data from RAND Corporation, S&P Global, McKinsey, Deloitte, and Pertama’s own client data (2,400+ enterprise AI initiatives), not independent peer-reviewed research. The 84% figure traces primarily to RAND Corporation’s August 2024 report, in which more than 80% of interviewees cited leadership and misalignment as the primary failure cause. The 61% figure appears to be Pertama’s own analysis. Additional Pertama statistics: average sunk cost of $4.2 million per abandoned project; large enterprises lost an average of $7.2 million per failed initiative and abandoned 2.3 initiatives in 2025; change management received less than 15% of total project budget in most failed projects; user adoption metrics were not tracked in 71% of projects.

For the article, attribute the 84% to “RAND Corporation and Pertama Partners analysis” rather than Pertama alone. The 61% figure should be attributed specifically to Pertama Partners’ compiled findings.

Change management is the overlooked crisis — the data is overwhelming

Multiple sources confirm that people and change management, not technology, determine AI success or failure. McKinsey’s Superagency report found leaders underestimate employee AI readiness by more than a factor of three: leaders estimate 4% of employees use GenAI for 30%+ of their tasks, but the actual figure is ~13%. Meanwhile, 47% of employees believe AI could handle 30%+ of their jobs within a year, versus only 20% of executives. A Writer survey on enterprise AI adoption found 41% of Millennial and Gen Z employees admit to sabotaging their company’s AI strategy by refusing to use AI tools. Kyndryl’s May 2025 report found ~70% of leaders say their workforce isn’t ready to leverage AI, while only 14% of companies (“AI pacesetters”) have fully aligned workforces. Pacesetters were 3× more likely to have implemented a change management strategy. BCG’s AI at Work 2025 found frontline AI usage has stalled at 51% regular use, while 75%+ of leaders and managers use GenAI several times weekly — a dangerous adoption gap.

The 90-day playbook: a converging consensus

Virtually all frameworks converge on a three-phase, 90-day approach for new AI leaders. Days 1–30 focus on assessment and alignment: inventory existing AI initiatives, define 3–5 North Star metrics, select 2–3 production-intent quick wins, establish governance (risk tiers, approval rules, logging), and baseline current performance. The EverWorker framework’s key gating principle: “If you can’t baseline it, don’t start it.” IBM’s CAIO guidance emphasizes conducting an organizational maturity assessment and documenting agency-wide ethics and values in this first phase.
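
As a rough illustration of that gating principle, the sketch below checks candidate pilots at intake and rejects any that cannot be baselined. The field names (north_star_metric, baseline_value, risk_tier) and thresholds are hypothetical, not taken from EverWorker, IBM, or any other cited framework.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PilotCandidate:
    name: str
    north_star_metric: str           # e.g. "invoices processed per hour"
    baseline_value: Optional[float]  # measured pre-AI performance, if any
    risk_tier: str                   # "low" | "medium" | "high"

def passes_intake_gate(p: PilotCandidate) -> bool:
    """Day 1-30 gate: if you can't baseline it, don't start it."""
    if p.baseline_value is None:
        print(f"REJECT {p.name}: cannot baseline '{p.north_star_metric}'")
        return False
    if p.risk_tier not in {"low", "medium", "high"}:
        print(f"REJECT {p.name}: unknown risk tier '{p.risk_tier}'")
        return False
    return True

candidates = [
    PilotCandidate("invoice triage", "invoices processed per hour", 42.0, "low"),
    PilotCandidate("brand voice assistant", "customer satisfaction", None, "medium"),
]
approved = [c.name for c in candidates if passes_intake_gate(c)]
print(approved)  # ['invoice triage']
```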

Days 31–60 shift to pilot deployment: launch 1–2 AI workflows with human escalation, integrate them into daily operations (Slack, CRM, ERP), and implement monitoring for quality, exceptions, cost, and risk. CIT Solutions recommends the “Golden Triangle” formula for selecting pilots: High Pain + Low Complexity + Clear ROI. Shadow mode deployment — running new AI in parallel with existing processes without customer exposure — is becoming standard practice for validation before production cutover.
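
To make the shadow-mode pattern concrete, here is a minimal Python sketch that runs a new AI handler alongside the existing process and logs both outputs for offline comparison, while customers only ever see the legacy result. The function names and stubbed logic are illustrative assumptions, not a reference to any vendor’s tooling.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("shadow_mode")

def legacy_process(request: dict) -> str:
    # Existing production workflow (stubbed for illustration).
    return f"legacy-decision-for-{request['id']}"

def ai_process(request: dict) -> str:
    # New AI workflow under evaluation (stubbed for illustration).
    return f"ai-decision-for-{request['id']}"

def handle_request(request: dict) -> str:
    """Serve the legacy result; run the AI path in shadow and log the comparison."""
    legacy_result = legacy_process(request)
    try:
        ai_result = ai_process(request)
        log.info("shadow comparison id=%s legacy=%r ai=%r match=%s",
                 request["id"], legacy_result, ai_result,
                 legacy_result == ai_result)
    except Exception:
        # A failure in the shadow path must never affect the customer-facing path.
        log.exception("shadow AI path failed for id=%s", request["id"])
    return legacy_result  # customers only ever receive the legacy output

print(handle_request({"id": "1001"}))
```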

Days 61–90 focus on scale or sunset decisions: publish measured outcomes against North Star metrics, formalize the governance playbook, make scale/sustain/stop decisions per use case, and begin AI workforce planning. Industry benchmarks suggest leaders complete first pilots in 60–90 days versus 6–12 months for laggards, then ship subsequent systems every 4–6 weeks versus quarterly.
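
A toy version of that Day-90 readout is sketched below: each use case is scored against its North Star baseline and routed to scale, sustain, or stop. The lift threshold and figures are invented for illustration, not benchmarks from the cited sources.

```python
def scale_decision(baseline: float, measured: float, target_lift: float) -> str:
    """Toy scale/sustain/stop rule for a North Star metric where higher is better."""
    lift = (measured - baseline) / baseline
    if lift >= target_lift:
        return "scale"
    if lift > 0:
        return "sustain"  # keep running and keep iterating
    return "stop"

# Hypothetical Day-90 results: invoices processed per hour, 20% lift target.
print(scale_decision(baseline=42.0, measured=55.0, target_lift=0.20))  # scale (~31% lift)
print(scale_decision(baseline=42.0, measured=44.0, target_lift=0.20))  # sustain (~5% lift)
print(scale_decision(baseline=42.0, measured=39.0, target_lift=0.20))  # stop (negative lift)
```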

The Umbrex fractional CAIO playbook extends this to 180 days, targeting vendor consolidation and cost-per-1K-tokens down ≥25% by Day 90, two Horizon 1 features live to 50% traffic with measured ROI by Day 135, and a full audit dress rehearsal by Day 180.
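
As a back-of-the-envelope illustration of the Day-90 cost target, the snippet below computes cost per 1K tokens before and after vendor consolidation and checks the ≥25% reduction goal. The spend and token figures are made up for the example.

```python
def cost_per_1k_tokens(total_spend_usd: float, total_tokens: int) -> float:
    """Blended model spend divided by token volume, expressed per 1,000 tokens."""
    return total_spend_usd / (total_tokens / 1_000)

# Hypothetical monthly figures, for illustration only.
baseline = cost_per_1k_tokens(total_spend_usd=18_000, total_tokens=600_000_000)  # $0.0300
day_90 = cost_per_1k_tokens(total_spend_usd=12_600, total_tokens=600_000_000)    # $0.0210

reduction = 1 - day_90 / baseline
print(f"baseline ${baseline:.4f}/1K, day 90 ${day_90:.4f}/1K, reduction {reduction:.0%}")
print("meets >=25% target:", reduction >= 0.25)  # True (30% reduction)
```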

Governance frameworks are evolving from gates to enablers

The most successful organizations treat governance as a speed enabler, not a bottleneck. ModelOp’s 2025 AI Governance Benchmark (100 senior AI/data leaders) found 80% of enterprises have 50+ GenAI use cases in pipeline, but 56% say it takes 6–18 months to move from intake to production, and 44% say governance processes are too slow. The emerging best practice uses risk-tiered governance: low-risk applications (internal productivity tools) get fast-track approval, medium-risk applications require shadow mode testing, and high-risk applications (customer-facing, financial) require full review. NIST’s AI Risk Management Framework provides the reference architecture with four functions: Govern, Map, Measure, Manage. Gartner reports over 60% of enterprises will require formal AI governance by 2026.
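
A minimal sketch of that risk-tiered routing is shown below. The tier definitions, intake flags, and review steps are assumptions layered on the description above, not a formal NIST AI RMF or vendor schema.

```python
from enum import Enum

class RiskTier(Enum):
    LOW = "low"        # internal productivity tools
    MEDIUM = "medium"  # workflow automation with human escalation
    HIGH = "high"      # customer-facing or financially material decisions

# Review path per tier, following the fast-track / shadow-mode / full-review split.
REVIEW_PATH = {
    RiskTier.LOW: ["self-serve checklist", "automated logging enabled"],
    RiskTier.MEDIUM: ["shadow-mode evaluation", "data privacy review"],
    RiskTier.HIGH: ["shadow-mode evaluation", "legal and compliance review",
                    "executive sign-off"],
}

def classify(use_case: dict) -> RiskTier:
    """Toy classifier from two intake flags; real intake forms capture far more."""
    if use_case.get("customer_facing") or use_case.get("financial_impact"):
        return RiskTier.HIGH
    if use_case.get("automates_decisions"):
        return RiskTier.MEDIUM
    return RiskTier.LOW

uc = {"name": "meeting summarizer", "customer_facing": False}
tier = classify(uc)
print(uc["name"], "->", tier.value, "->", REVIEW_PATH[tier])
```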

The shadow AI challenge adds urgency: 79% of IT leaders have encountered unauthorized AI deployments (Nutanix 2026 Enterprise Cloud Index), and 75% of global knowledge workers use AI at work with the majority bringing their own tools (Microsoft/LinkedIn). The strategic response is to channel shadow AI into governed momentum rather than prohibit it — provide sanctioned alternatives and implement governance as enablement.

What distinguishes AI winners: workflow redesign above all

Across McKinsey, BCG, and Deloitte, workflow redesign emerges as the single strongest predictor of AI success. McKinsey tested 31 variables and found organizations that fundamentally redesign workflows are 3× more likely to achieve high performance. BCG found that ~90% of future-built companies expect most AI value from “reshaping and inventing business processes,” and leaders allocate over 80% of AI investments to reshaping key functions. BCG’s widely cited 10-20-70 rule captures the investment imbalance needed: 10% on algorithms, 20% on technology, and 70% on people and processes.

The CAIO role is crystallizing rapidly — from 11% of organizations in 2023 to 26% in 2025 (IBM Institute for Business Value). Companies with a CAIO report stronger AI ROI. The U.S. federal government now requires agencies to designate CAIOs, with an OMB-mandated AI Council established within 90 days of the April 2025 executive memo. HBR Analytic Services (December 2025) found only 6% of companies fully trust AI agents to handle core business processes, while 43% trust them only for limited or routine tasks — underscoring the change management challenge ahead.


Corrections and source-quality summary

The strongest, most citable primary sources: McKinsey’s State of AI 2025 and Superagency report (January 2025); S&P Global’s Voice of the Enterprise survey (March 2025); BCG’s “Build for the Future 2025” (September 30, 2025); Deloitte’s State of AI in the Enterprise 2026; and RAND Corporation’s August 2024 AI project failure report.