AI Strategy Framework

The 10-20-70 Rule: Why 70% of AI Success Depends on People, Not Algorithms

BCG research across thousands of enterprise engagements reached a counterintuitive conclusion: the model you choose is nearly irrelevant. The people you train, the processes you redesign, and the culture you build — that is where AI transformation lives or dies.

By John Byron Hanby IV, CEO & Founder, Iternal Technologies · January 15, 2026 · Updated April 8, 2026 · 9 min read
TL;DR

The Quick Answer

The 10-20-70 rule states that AI success is 10% algorithms, 20% technology infrastructure, and 70% people and processes. Defined in BCG research and documented as the organizing principle of The AI Strategy Blueprint, the rule explains why most AI investments stall: organizations obsess over model selection while neglecting the training, change management, and workflow redesign that actually determine outcomes. Only 8% of managers possess AI skills. Just 1 in 4 employees demonstrates high generative AI fluency. Two-thirds report inadequate training. The 70% is not a soft HR concern — it is the hard constraint that makes or breaks your AI investment.

What Is the 10-20-70 Rule?

The 10-20-70 rule is a framework for understanding where AI value is actually created inside an enterprise. It divides the ingredients of AI success into three buckets — and the proportions are far more surprising than most technology buyers expect.

"The 10-20-70 rule applies to AI success: 10% depends on algorithms, 20% on technology infrastructure, and 70% on people and processes. Organizations that focus exclusively on model selection while neglecting training, change management, and workflow redesign will fail regardless of their technical investments." — The AI Strategy Blueprint, Chapter 1, John Byron Hanby IV

The rule originates from BCG research across thousands of enterprise AI engagements. It was validated independently through Accenture GenAI talent studies and is documented as the organizing principle of all 16 chapters of The AI Strategy Blueprint — from governance to ROI quantification to change management, every framework in the book maps back to this single insight.

The rule does not say technology does not matter. A broken model produces broken output. What it says is that once your technology passes a minimum threshold — which is lower than most organizations think — the decisive variable shifts entirely to your workforce.

  • 10% Algorithms — model selection, prompt engineering, fine-tuning
  • 20% Technology — data pipelines, infrastructure, compliance architecture
  • 70% People & Process — AI literacy, change management, workflow redesign, champion networks

Source: BCG enterprise AI research, cited in The AI Strategy Blueprint (Chapters 1 & 6)
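One way to make the split concrete is to apply it to a program budget. The following sketch is illustrative only: the 10-20-70 proportions come from the rule itself, while the $2M total budget is a hypothetical figure, not from the book.

```python
# Hypothetical sketch: applying the 10-20-70 split to an AI program budget.
# The proportions are from the rule; the $2M total is an assumed example.
SPLIT = {"algorithms": 0.10, "technology": 0.20, "people_and_process": 0.70}

def allocate(total_budget: float) -> dict[str, float]:
    """Divide a total AI budget according to the 10-20-70 rule."""
    return {bucket: total_budget * share for bucket, share in SPLIT.items()}

budget = allocate(2_000_000)
for bucket, amount in budget.items():
    print(f"{bucket}: ${amount:,.0f}")
```

Read this way, the rule is a budgeting sanity check: if the people-and-process line is not the largest item in the plan, the plan is inverted relative to where the value is created.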

The 10% — Algorithms

The algorithms matter. A model with a 20% hallucination rate on factual queries is not production-ready. A model with insufficient context window cannot process your contracts. Basic technical thresholds are real.

But the data shows where model selection falls in the hierarchy of AI success factors: at the bottom. And the bottom tier gets more crowded every quarter.

Frontier AI capabilities are commoditizing at a pace that has surprised even the analysts who predicted it. The same LLMs powering the most sophisticated enterprise deployments are accessible to every organization through standard API keys. A 3-billion parameter model running locally on a laptop today achieves comparable quality to the original ChatGPT release. Within 6 to 12 months of any given date, models matching the previous year's frontier capability typically become available for local deployment.

The implication is stark: model selection is not a source of competitive differentiation. Your competitor has access to the same models you do. The question is not which model you chose — the question is how effectively your organization deploys it.

The same frontier models are available to every enterprise. The performance gap between AI leaders and laggards has nothing to do with which LLM they chose — and everything to do with how well they prepared their people to use it.

The 20% — Technology Infrastructure

Technology infrastructure is the second ingredient: data pipelines, deployment architecture, security wrapping, compliance frameworks, and integration scaffolding. This layer is necessary. Without it, nothing runs. But it is also not sufficient.

Organizations frequently discover this the hard way. A 12-month infrastructure build concludes. The AI platform is deployed. Security has been reviewed. Compliance has signed off. And then... adoption stalls. Employees cannot figure out how to incorporate the tool into their workflows. Power users emerge but remain isolated. The platform gathers dust while leadership escalates pressure.

The infrastructure problem is solvable with engineering time and money. The people problem cannot be solved with either. It requires structured training, visible leadership commitment, and the kind of sustained change management investment that most IT-led AI initiatives never plan for.

On the architecture side, the fastest path to production is often the simplest. Local, secure AI chat assistants — like AirgapAI — deploy in hours rather than months, require no external security approvals because data never leaves the device, and remove the restriction paradox that cripples cloud-based adoption: employees can finally use their actual work data, customer data, and proprietary content without fear of exposure. This architectural simplicity is itself a people enabler — it removes the friction that prevents experimentation and learning.

The 70% — People and Process

This is where the transformation actually lives.

The statistics on AI workforce readiness are sobering and consistent across every major research source:

  • Only 8% of managers possess AI skills — leaving the vast majority of organizational decision-makers unable to direct or evaluate AI work effectively.
  • Just 1 in 4 employees demonstrates high generative AI fluency — meaning three-quarters of your workforce cannot reliably extract value from AI tools they have already been given.
  • Two-thirds of workers report inadequate training on the AI tools their organizations have deployed.
  • 54% of employees use shadow AI — unsanctioned external tools like ChatGPT, Claude, Gemini, and Perplexity — because their organization failed to provide a sanctioned, trusted alternative.

What does the gap look like in practice? Employees receive AI tools, launch them once or twice, struggle to get useful outputs, conclude that AI does not work for their job, and return to familiar methods. The tool sits unused. When someone asks why adoption numbers are low, the employee reports that they "tried it" but it "didn't really help."

The problem is never the technology. The problem is that employees were never taught how to communicate with AI effectively. They were handed a power tool without a user manual, evaluated it against their inability to use it, and filed it away.

"BCG research reveals that 70% of AI success depends on people and processes, not technology. The 10-20-70 rule frames the challenge: 10% of value comes from algorithms, 20% from data and technology infrastructure, and 70% from how organizations transform their workflows and people." — The AI Strategy Blueprint, Chapter 6 (Change Management)

The 70% encompasses five interconnected people-and-process elements:

AI Literacy

Structured curricula that teach employees how to communicate with AI — from foundational prompt engineering to role-specific workflows.

Champion Networks

Internal advocates at every level — IT, department heads, and executives — whose peer-led demonstration accelerates adoption faster than any mandate.

Workflow Redesign

Deliberate restructuring of how work gets done — AI does not just make processes faster, it changes which processes are needed at all.

Change Management

Addressing the psychology of transformation: fear of replacement, AI stigma, tool fatigue, and the committee paralysis trap.

Governance as Enabler

Risk-tiered frameworks that enable safe experimentation rather than creating barriers that push employees toward shadow AI.

The 3x Success Multiplier

The business case for investing in the 70% is not philosophical — it is quantified by BCG research and cited directly in The AI Strategy Blueprint:

3x: Responsible AI implementation triples the chances of capturing full AI benefits.

"Responsible AI" in BCG's framing is not primarily about ethics — it is about the full-stack investment in people, process, and governance that the 70% represents. Organizations that invest in this layer are three times more likely to achieve the transformational value they set out to capture.

BCG also found that 88% of advanced AI users report AI makes their work more enjoyable. Once employees cross the fluency threshold, adoption becomes self-sustaining. The challenge is getting enough people to advanced usage levels that they can serve as advocates who pull their colleagues forward.

And peer learning is the number one driver of AI skill acquisition — 69% of respondents in BCG research cite colleagues as their primary learning channel. Formal training programs matter, but the real acceleration happens when knowledgeable colleagues are available to demonstrate techniques and model effective usage. This is the champion network dynamic, and it is entirely a people investment.

Source of This Framework

The AI Strategy Blueprint

The 10-20-70 rule is the organizing principle for all 16 chapters of The AI Strategy Blueprint. Every framework — from governance to ROI to security — maps back to the insight that people and process matter more than model selection. Get the complete playbook on Amazon.


The Failure Mode: Treating AI as an IT Project

The central thesis of The AI Strategy Blueprint is stated plainly in Chapter 1:

"AI is not a technology project. It is a business transformation. A transformation bigger and more significant than any to come before. Organizations that approach AI as an IT initiative, delegating decisions to technical committees and evaluating solutions against infrastructure specifications, consistently fail to capture meaningful value." — The AI Strategy Blueprint, Chapter 1

The IT project failure mode is so common it has a name: pilot purgatory. Multiple pilots run indefinitely. Proofs of concept accumulate. Impressive demos never reach production. Meanwhile, competitors deploy AI to real business problems, compound their learning advantages, and systematically widen the gap.

The committee paralysis trap is a variant of the same failure. Many enterprise AI committees accomplish nothing because they become discussion forums rather than action-oriented teams. The organizations seeing AI success are those that bypass committee structures and engage directly with specific use cases and measurable outcomes.

The root cause is always the same: the organization funded the 10% (models) and the 20% (infrastructure) while underfunding or ignoring the 70% (people and process). Without workforce readiness, there is no one to carry the transformation into production.

Consider the stark adoption mathematics. The typical enterprise has identified hundreds of AI use cases but deployed fewer than six to production. Only 22% of organizations have moved beyond proof-of-concept. Only 4% are generating substantial value. This is not a technology failure — the technology works. It is a people-and-process failure at scale.

Closing the 70% Gap

The good news: the 70% gap is closable. It requires deliberate investment, a clear sequence, and the willingness to treat AI adoption as the business transformation it actually is.

Step 1: Deploy Secure, Unrestricted AI Access First

The most common adoption killer is restriction. Organizations deploy cloud AI tools and immediately limit what employees can input. Sensitive data cannot be entered. Customer information must be excluded. The restrictions, while well-intentioned, create a paradox: the majority of knowledge work involves precisely the data that cannot be used with cloud AI. Employees who can only apply AI to a narrow slice of their work never develop fluency.

The solution is a secure, local AI assistant — like AirgapAI — that processes data on-device with no external transmission. When employees know nothing leaves their device, they stop self-censoring and start experimenting. Unrestricted experimentation is the prerequisite for AI literacy.

Step 2: Build a Structured AI Literacy Curriculum

Generic AI training produces generic results. Effective literacy programs are role-based: the AI skills a marketing manager needs are different from those a finance analyst or compliance officer requires. The curriculum must progress from foundational prompt communication to role-specific workflow integration.

Leadership multiplier effects are real and documented. Improving leadership AI sentiment from 15% positive to 55% positive produces corresponding adoption gains across the entire organization that reports to those leaders. Executive literacy is not optional — it is the lever that moves everything beneath it.

Step 3: Identify and Activate Champion Networks

Every organization has employees who will become power users the moment they have access to capable tools. They are not always the most senior or technical staff. Often, mid-level knowledge workers with immediate practical use cases demonstrate the most enthusiasm and creativity. By deploying AI broadly — a shotgun approach — organizations naturally surface these champions through their behavior: who experiments independently, who shares tips unprompted, who asks "what if" questions about new applications.

Once identified, champions require cultivation: advanced training access, direct connection to the AI strategy team, and protected time for experimentation. Peer learning at 69% makes champions the most cost-effective literacy investment available.

Step 4: Follow the Deploy-Reshape-Invent Sequence

BCG's three-horizon framework provides the sequencing discipline that prevents organizations from attempting transformation before their foundation is built:

  1. Deploy (0–6 months) — Quick wins that augment existing workflows without process redesign. Document summarization, email drafting, meeting transcription. Build comfort. Demonstrate value.
  2. Reshape (6–18 months) — Process redesign around AI capabilities. Eliminate redundant steps, redefine roles, restructure how teams collaborate. Only begins after the Deploy foundation is established.
  3. Invent (18+ months) — Business model innovation enabled by full organizational AI capability. Organizations that attempt to Invent before they Deploy set themselves up for expensive failures.

The $135M Stakes

The 10-20-70 rule is not just a strategic concept. It is a financial calculation. Chapter 16 of The AI Strategy Blueprint makes the productivity math explicit:

10,000 knowledge workers × 3.5 hrs saved per week per worker × $75 fully loaded cost per hour = $135M annual productivity value

Research indicates that AI users save approximately 3.5 hours per week, even while AI literacy remains extremely low. Every year of delayed investment, that $135M accrues to your competitors instead.

The 3.5 hours per week savings is not an aspiration for some future AI-literate workforce. It is what organizations achieve today, with current literacy levels still near the floor. Closing the 70% gap — investing in the AI literacy, workflow redesign, and change management that the rule identifies as the decisive ingredient — amplifies that number substantially.
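The arithmetic behind the headline figure can be sketched directly. The per-worker numbers come from the article; the 52-working-week annualization is an assumption on my part, which lands slightly above the book's rounded $135M (the rounded figure implies marginally fewer working weeks).

```python
# Sketch of the Chapter 16 productivity math. Per-worker figures are from
# the article; the 52-week annualization is an assumption, so the result
# lands slightly above the book's rounded $135M.
workers = 10_000              # knowledge workers
hours_saved_per_week = 3.5    # per worker, per the cited research
cost_per_hour = 75            # fully loaded cost, USD
weeks_per_year = 52           # assumption

weekly_value = workers * hours_saved_per_week * cost_per_hour
annual_value = weekly_value * weeks_per_year

print(f"Weekly: ${weekly_value:,.0f}")      # Weekly: $2,625,000
print(f"Annual: ${annual_value / 1e6:.1f}M")  # Annual: $136.5M
```

The sensitivity is worth noting: because the formula is a straight product, halving any one input (say, hours saved, if literacy stays low) halves the annual value, which is exactly the lever the 70% investment is meant to protect.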

The question is not whether your organization can afford to invest in the 70%. The question is whether it can afford not to. For more on the financial implications, see our deep-dive on the cost of AI inaction.

AI Academy

The 70% Lives Here

AI Academy is Iternal's direct fulfillment of the 70% — the structured learning platform that closes the literacy gap BCG identified as the primary barrier between AI investment and AI value. Role-based curricula, 500+ courses, certification programs aligned with the EU AI Act, and a $7/week trial that puts AI fluency within reach of every employee.

  • 500+ courses across beginner, intermediate, advanced
  • Role-based curricula: Marketing, Sales, Finance, HR, Legal, Operations
  • Certification programs aligned with EU AI Act Article 4 literacy mandate
  • $7/week trial — start learning in minutes
Expert Guidance

Bridge the 70% With Expert Guidance

Our consulting programs are designed around the 10-20-70 rule — we do not lead with model selection. We lead with people readiness, champion network activation, and change management architecture. Then we layer in the technology.

  • $566K+ bundled technology value
  • 78x accuracy improvement
  • 6 clients per year (max)
  • Masterclass ($2,497): Self-paced AI strategy training with frameworks and templates
  • Transformation Program ($150,000): 6-month enterprise AI transformation with embedded advisory
  • Founder's Circle ($750K–$1.5M): Annual strategic partnership with priority access and equity alignment
FAQ

Frequently Asked Questions

What is the 10-20-70 rule of AI?

The 10-20-70 rule is a framework for understanding where AI success actually comes from. Only 10% of AI success depends on the algorithms or models chosen. Another 20% depends on technology infrastructure — data pipelines, deployment architecture, compliance wrapping. The remaining 70% — the decisive majority — depends on people and processes: workforce literacy, change management, workflow redesign, and cultural adoption. Organizations that focus exclusively on model selection while neglecting the 70% will fail regardless of their technical sophistication.

Where does the 10-20-70 rule come from?

The 10-20-70 rule is rooted in BCG research on enterprise AI adoption and is documented extensively in The AI Strategy Blueprint by John Byron Hanby IV. BCG's work across thousands of enterprise AI engagements consistently found that the organizations achieving transformational AI value were not technologically superior — they had built superior institutional capability for deploying AI effectively. The rule is cited in Chapters 1 and 6 of the book as the organizing principle for all 16 chapters.

Why does 70% of AI success depend on people rather than technology?

Because the technology is accessible to everyone. Frontier AI models are available to any organization — the same LLMs that power your competitors also power you. What cannot be purchased off a shelf is organizational capability: an AI-literate workforce, redesigned workflows that integrate AI effectively, champion networks that drive adoption, and change management that transforms how people think about their work. The scarcity is not the model; it is the organizational capacity to deploy the model at scale.

What happens when organizations ignore the 70%?

Organizations that treat AI as an IT project — focusing exclusively on model selection and infrastructure — consistently fail to capture meaningful value. They pilot endlessly, accumulate proofs of concept, and generate impressive demonstrations that never reach production. The statistics are stark: only 1 in 5 AI initiatives achieve ROI, and 1 in 50 deliver true transformation. The common thread in nearly every failure is not bad technology — it is inadequate investment in people and process change.

How do you implement the 70%?

Implementing the 70% requires a coordinated people-and-process strategy: deploy a secure AI chat assistant that every employee can use with their actual work data; build a structured AI literacy curriculum with role-based training; identify and cultivate champion networks at every organizational level; address the psychological dimensions of change — fear of replacement, AI stigma, and tool fatigue; and establish governance frameworks that enable rather than restrict experimentation. AI Academy from Iternal Technologies is purpose-built to fulfill the 70% with 500+ courses and role-based certifications.

Does frontier model commoditization invalidate the 10-20-70 rule?

No. Frontier model commoditization actually strengthens the rule. As the 10% (algorithms) becomes increasingly available to all organizations through the same cloud APIs, competitive differentiation shifts entirely to the 70% (people and process). When everyone has access to the same models, the organization that wins is the one whose workforce knows how to use them effectively. BCG research confirms this: "responsible AI implementation triples the chances of capturing full AI benefits" — and responsible implementation is entirely a people-and-process achievement, not a model-selection decision.

About the Author

John Byron Hanby IV

CEO & Founder, Iternal Technologies

John Byron Hanby IV is the founder and CEO of Iternal Technologies, a leading AI platform and consulting firm. He is the author of The AI Strategy Blueprint and The AI Partner Blueprint, the definitive playbooks for enterprise AI transformation and channel go-to-market. He advises Fortune 500 executives, federal agencies, and the world's largest systems integrators on AI strategy, governance, and deployment.