What Is the 10-20-70 Rule?
The 10-20-70 rule is a framework for understanding where AI value is actually created inside an enterprise. It divides the ingredients of AI success into three buckets, and the proportions surprise most technology buyers.
"The 10-20-70 rule applies to AI success: 10% depends on algorithms, 20% on technology infrastructure, and 70% on people and processes. Organizations that focus exclusively on model selection while neglecting training, change management, and workflow redesign will fail regardless of their technical investments." — The AI Strategy Blueprint, Chapter 1, John Byron Hanby IV
The rule originates from BCG research across thousands of enterprise AI engagements. It was validated independently through Accenture GenAI talent studies and is documented as the organizing principle of all 16 chapters of The AI Strategy Blueprint — from governance to ROI quantification to change management, every framework in the book maps back to this single insight.
The rule does not say technology does not matter. A broken model produces broken output. What it says is that once your technology passes a minimum threshold — which is lower than most organizations think — the decisive variable shifts entirely to your workforce.
Source: BCG enterprise AI research, cited in The AI Strategy Blueprint (Chapters 1 & 6)
The 10% — Algorithms
The algorithms matter. A model with a 20% hallucination rate on factual queries is not production-ready. A model with insufficient context window cannot process your contracts. Basic technical thresholds are real.
But here is where the data places model selection in the hierarchy of AI success factors: at the bottom. And that tier gets more crowded every quarter.
Frontier AI capabilities are commoditizing at a pace that has surprised even the analysts who predicted it. The same LLMs powering the most sophisticated enterprise deployments are accessible to every organization through standard API keys. A 3-billion-parameter model running locally on a laptop today achieves quality comparable to the original ChatGPT release, and within 6 to 12 months, models matching the previous year's frontier capability typically become available for local deployment.
The implication is stark: model selection is not a source of competitive differentiation. Your competitor has access to the same models you do. The question is not which model you chose — the question is how effectively your organization deploys it.
The 20% — Technology Infrastructure
Technology infrastructure is the second ingredient: data pipelines, deployment architecture, security wrapping, compliance frameworks, and integration scaffolding. This layer is necessary. Without it, nothing runs. But it is also not sufficient.
Organizations frequently discover this the hard way. A 12-month infrastructure build concludes. The AI platform is deployed. Security has been reviewed. Compliance has signed off. And then... adoption stalls. Employees cannot figure out how to incorporate the tool into their workflows. Power users emerge but remain isolated. The platform gathers dust while leadership escalates pressure.
The infrastructure problem is solvable with engineering time and money. The people problem cannot be solved with either. It requires structured training, visible leadership commitment, and the kind of sustained change management investment that most IT-led AI initiatives never plan for.
On the architecture side, the fastest path to production is often the simplest. Local, secure AI chat assistants — like AirgapAI — deploy in hours rather than months, require no external security approvals because data never leaves the device, and remove the restriction paradox that cripples cloud-based adoption: employees can finally use their actual work data, customer data, and proprietary content without fear of exposure. This architectural simplicity is itself a people enabler — it removes the friction that prevents experimentation and learning.
The 70% — People and Process
This is where the transformation actually lives.
The statistics on AI workforce readiness are sobering and consistent across every major research source:
- Only 8% of managers possess AI skills — leaving the vast majority of organizational decision-makers unable to direct or evaluate AI work effectively.
- Just 1 in 4 employees demonstrates high generative AI fluency — meaning three-quarters of your workforce cannot reliably extract value from AI tools they have already been given.
- Two-thirds of workers report inadequate training on the AI tools their organizations have deployed.
- 54% of employees use shadow AI — unsanctioned external tools like ChatGPT, Claude, Gemini, and Perplexity — because their organization failed to provide a sanctioned, trusted alternative.
What does the gap look like in practice? Employees receive AI tools, launch them once or twice, struggle to get useful outputs, conclude that AI does not work for their job, and return to familiar methods. The tool sits unused. When someone asks why adoption numbers are low, the employee reports that they "tried it" but it "didn't really help."
The problem is never the technology. The problem is that employees were never taught how to communicate with AI effectively. They were handed a power tool without a user manual, evaluated it against their inability to use it, and filed it away.
"BCG research reveals that 70% of AI success depends on people and processes, not technology. The 10-20-70 rule frames the challenge: 10% of value comes from algorithms, 20% from data and technology infrastructure, and 70% from how organizations transform their workflows and people." — The AI Strategy Blueprint, Chapter 6 (Change Management)
The 70% encompasses five interconnected people-and-process elements:
AI Literacy
Structured curricula that teach employees how to communicate with AI — from foundational prompt engineering to role-specific workflows.
Champion Networks
Internal advocates at every level — IT, department heads, and executives — whose peer-led demonstration accelerates adoption faster than any mandate.
Workflow Redesign
Deliberate restructuring of how work gets done — AI does not just make processes faster, it changes which processes are needed at all.
Change Management
Addressing the psychology of transformation: fear of replacement, AI stigma, tool fatigue, and the committee paralysis trap.
Governance as Enabler
Risk-tiered frameworks that enable safe experimentation rather than creating barriers that push employees toward shadow AI.
The 3x Success Multiplier
The business case for investing in the 70% is not philosophical — it is quantified by BCG research and cited directly in The AI Strategy Blueprint:
"Responsible AI" in BCG's framing is not primarily about ethics — it is about the full-stack investment in people, process, and governance that the 70% represents. Organizations that invest in this layer are three times more likely to achieve the transformational value they set out to capture.
BCG also found that 88% of advanced AI users report AI makes their work more enjoyable. Once employees cross the fluency threshold, adoption becomes self-sustaining. The challenge is getting enough people to advanced usage so they can serve as advocates who pull their colleagues forward.
And peer learning is the number one driver of AI skill acquisition — 69% of respondents in BCG research cite colleagues as their primary learning channel. Formal training programs matter, but the real acceleration happens when knowledgeable colleagues are available to demonstrate techniques and model effective usage. This is the champion network dynamic, and it is entirely a people investment.