AI-Washing: The New Greenwashing — And 7 Questions That Expose It
Every VAR, MSP, and reseller now claims AI expertise. Most are lying — not maliciously, but because market pressure to claim AI capability vastly outpaces the time required to actually build it. The result is an epidemic of AI-washing: partners who have added AI to their marketing materials without building the competency to deliver. These 7 diagnostic questions, drawn from Chapter 11 of The AI Strategy Blueprint, expose AI-washers before you hand them a purchase order.
What Is AI-Washing and How Do You Detect It?
AI-washing is the practice of adding AI terminology, logos, and claims to marketing materials without building genuine AI competency. The term is analogous to greenwashing in environmental marketing: the surface appearance of a capability that does not exist in substance. It is epidemic in 2026 because every technology reseller faces pressure to claim AI expertise, and the time required to actually build that expertise — typically 18–24 months of deliberate investment — exceeds the urgency of the marketing opportunity. Detection requires specificity: genuine AI partners answer questions about engagements, challenges, and outcomes with detail drawn from experience; AI-washers respond with vendor talking points, general claims, and deferral. The 7 exposure questions below operationalize this diagnostic into a conversation framework you can use in the first 30 minutes of any partner meeting. See also: The 10-Point AI Partner Evaluation Checklist.
What Is AI-Washing?
In environmental marketing, greenwashing refers to the practice of making misleading or unsubstantiated claims about the environmental credentials of a product or organization. Companies facing pressure from consumers and regulators to demonstrate environmental responsibility find it faster to update their marketing materials than to change their actual practices.
AI-washing is the same pattern applied to the technology channel. As enterprise demand for AI capability accelerates, every technology reseller, VAR, MSP, and systems integrator faces pressure to claim AI expertise. The fastest path to meeting that market expectation is not building an AI practice — it is adding AI to existing marketing materials, signing up for vendor partner programs, and deploying AI terminology throughout sales conversations.
“Partners who have added AI to their marketing materials without building genuine competency consume your resources while delivering little. The evaluation frameworks in this chapter ensure you distinguish between these outcomes before committing.”
— John Byron Hanby IV, The AI Strategy Blueprint, Chapter 11
The practical consequence for enterprise buyers: a partner who claims AI capability that does not exist in substance becomes your organization's ceiling. You cannot access AI excellence through a partner who has not built AI excellence themselves. The resources you invest in an AI-washing partner — time, budget, organizational energy, leadership attention — are resources you cannot deploy with a genuine AI partner.
AI-washing is not merely inconvenient. Every quarter spent with a partner who cannot deliver is a quarter your competitors spend with partners who can. In a market where AI first-mover advantage compounds, partner selection quality is a strategic differentiator.
Chapter 11 of The AI Strategy Blueprint provides the complete detection and selection framework. This article focuses specifically on the AI-washing detection component: the diagnostic questions, red flags, and authenticity markers that separate genuine AI partners from those who have adopted the vocabulary without building the practice.
Why AI-Washing Is Epidemic in 2026
According to BCG, 84% of organizations now work with two or more vendors on AI initiatives. This multi-vendor reality signals mainstream enterprise AI adoption — and with mainstream adoption comes mainstream demand pressure on every technology partner, regardless of their actual readiness to meet it.
The AI-washing epidemic has three structural drivers that are not going away:
The Competency Gap
Building a genuine AI practice takes 18–24 months of deliberate investment in personnel, certifications, ISV relationships, and methodology development. Market pressure to claim AI capability arrived in 2023. The math produces a multi-year gap during which partners can claim what they have not yet built.
The Relationship Moat
Incumbent technology partners hold deep organizational knowledge, established trust, and existing procurement relationships. End customers often prefer extending existing relationships over onboarding new partners. This moat enables AI-washing partners to delay capability investment while continuing to win business on relationship inertia.
The Measurement Deficit
Most organizations lack the frameworks to evaluate AI partner competency at the point of selection. Without specific evaluation criteria, partner selection defaults to brand recognition, relationship quality, and price — dimensions that have nothing to do with AI capability. AI-washing thrives where evaluation rigor is absent.
“The AI market has attracted partners who have added AI branding to their marketing without building substantive capability. This phenomenon, analogous to greenwashing in environmental claims, presents genuine risk for organizations seeking AI partners.”
— John Byron Hanby IV, The AI Strategy Blueprint, Chapter 11
The good news: AI-washing is detectable. The diagnostic is not sophisticated or time-consuming. It requires asking specific questions and evaluating the specificity of the answers. Genuine AI partners speak from experience. AI-washing partners speak from vendor materials. The 7 questions below make this distinction visible in a single conversation.
For the broader context of how partner selection fits within your AI strategy, see our pillar guide: Enterprise AI Strategy: The Complete Framework. For the governance framework that determines which AI solutions are even permissible within your organization, see Building an AI Governance Framework.
The 4 Red Flags of AI-Washing
These four patterns appear consistently in AI-washing partners. Each is observable in a single partner meeting. None requires technical expertise to identify. The more of these flags you see in a single partner conversation, the lower the probability that genuine AI competency exists behind the marketing.
Vendor Partnerships Without Implementations
The AI-washer leads every conversation with vendor partnership announcements: “We are an [ISV] partner,” “We recently signed a reseller agreement with [AI vendor].” These announcements are not lies — the partnerships typically exist. But they describe signing an agreement, not building a practice. When you ask how many implementations they have completed on that platform, the number is zero, one, or “we are working on our first.” Partnerships require investment to join. Implementations require competency to execute.
Generic Claims, Zero Specificity
AI-washing partners speak in AI generalities that cannot be falsified: “We have deep expertise in AI,” “We help organizations with their AI journey,” “AI is a core part of our strategy going forward.” These statements are structurally identical to what every partner says regardless of actual capability. Genuine AI partners speak with specificity: a named engagement, a described challenge, a quantified outcome, a lesson learned from experience. Specificity is the diagnostic. If a partner cannot be specific, they do not have experience to draw on.
References Unavailable, Delayed, or Irrelevant
Ask for AI-specific customer references and observe what happens. A genuine AI partner responds with names and contact information immediately, because they have completed implementations and their customers are willing to speak to it. An AI-washing partner responds with delay (“We will need to check with our customers”), deflection (offering general technology references rather than AI-specific ones), or irrelevance (references who, when called, describe basic configurations that any junior technician could complete). The speed and quality of reference availability is itself a diagnostic.
No AI Practice, No AI Use
The most reliable single indicator of AI-washing: a partner who does not use AI in their own operations. Genuine AI partners integrate AI into their sales process, proposal generation, customer research, and internal workflows. They use the products they sell. They have operational experience with the adoption challenges their customers will face. AI-washing partners have added AI to their pitch deck but have not deployed it internally. Ask directly: “How does your team use AI in your own business?” Vague answers confirm the diagnosis.
These four red flags are observable without technical knowledge. You do not need to understand the difference between RAG and fine-tuning to detect AI-washing. You need to ask questions that require experience to answer — and observe whether the answers come from experience or from a vendor slide deck.
The 7 Exposure Questions
Ask these seven questions in any partner evaluation conversation. For each question, compare what a genuine AI partner sounds like against what an AI-washer sounds like. The diagnostic is not the question itself — it is the specificity, confidence, and operational grounding of the answer.
Question 1: “Describe a completed AI implementation: the customer, the challenge, and the measured outcome.”

Genuine AI partner: Names a specific customer (often by category for confidentiality), describes the use case in detail, identifies the specific challenges encountered during implementation, and quantifies the outcome. The narrative is non-linear — real experience includes unexpected complications. The timeline, team composition, and support model are specific.

AI-washer: Describes a use case in generic terms without customer specificity. Cannot identify implementation challenges because there was no implementation. Outcome is described in unmeasurable terms (“customers are very happy”). References a pilot still in progress rather than a completed deployment.

Question 2: “How does your team use AI in your own business today?”

Genuine AI partner: Describes specific operational uses: proposal generation with a named AI tool, customer research automation, support ticket summarization, meeting intelligence. The examples are current and specific. The partner can describe what worked, what did not, and what they learned. They have the practitioner's vocabulary: inference speed, prompt engineering, accuracy tuning, change management challenges.

AI-washer: Uses ChatGPT occasionally for email drafting. Has attended vendor webinars about AI. Is “evaluating several options” for internal deployment. Cannot describe operational AI use because they have not deployed it. The gap between what they sell and what they use is visible in this answer.

Question 3: “Who on your team holds AI certifications, and which certifications are they?”

Genuine AI partner: Names specific individuals, specific certification programs (not just “Microsoft AI” but the specific exam designation), and recent dates. Can describe what the certification required: a learning curriculum, a practical exam, a hands-on project. Certification documentation is available to share. Multiple team members hold certifications, not just one AI champion.

AI-washer: References vendor partner status (“we are a certified partner”) rather than individual certifications. Cannot name specific certified individuals. Certifications are in progress. The one person who held an AI certification left the company. Documentation will require time to compile.

Question 4: “Walk me through your implementation methodology, from scoping through post-deployment adoption.”

Genuine AI partner: Presents a documented methodology with named phases, deliverables, and timelines. Can describe how the methodology evolved based on implementation experience. Has templates, worksheets, and playbooks. Describes post-deployment adoption support as a distinct practice area with its own team and approach. The methodology handles regulated industries differently than commercial deployments.

AI-washer: Presents a generic project management framework relabeled for AI. References the vendor's implementation guide as their methodology. Cannot describe how their approach handles the specific challenges of AI adoption (change resistance, accuracy calibration, data quality). Post-deployment support is “we are always available.”

Question 5: “Where does our data go in this solution, and how is it secured?”

Genuine AI partner: Answers with technical specificity: describes the data architecture of the proposed solution, the transmission model, the data residency, and the contractual obligations around data deletion. For regulated industries, has the compliance documentation ready. If the solution transmits data externally, can articulate the security controls and can also offer on-premises or air-gap alternatives for sensitive use cases.

AI-washer: Defers security questions to the ISV. Assures you the ISV is “SOC 2 certified” without knowing what that means for your specific data. Cannot distinguish between different data architectures (cloud, hybrid, on-premises, air-gap). Treats all AI security questions as IT questions rather than AI-specific risk questions.

Question 6: “How much AI-specific revenue did you generate in the past year?”

Genuine AI partner: Provides a specific number or credible range. Can break down the figure between product resale, implementation services, and ongoing managed support. Describes the trajectory — how revenue grew quarter over quarter as the practice matured. The benchmark from Chapter 11: vTECH io generated $5–6M in net new AI revenue in Year 1 of a genuine AI practice. Not every partner will reach this level, but a genuine practice should have measurable AI-specific revenue.

AI-washer: Conflates AI revenue with total technology revenue. Cannot separate AI-specific bookings from general IT resale. Cites pipeline (“we have a lot of opportunities in the pipeline”) rather than closed revenue. Revenue from AI-specific engagements is not tracked separately because the AI practice is not treated as a distinct business unit.

Question 7: “Describe your relationship with your primary ISV. What does it look like in practice?”

Genuine AI partner: Describes a genuine bi-directional ISV relationship: the ISV refers opportunities, co-sells on named accounts, provides dedicated technical resources for implementation support, and jointly develops go-to-market materials. The partner can articulate specific reasons for their ISV selection — security architecture, deployment model, use case fit — that reflect strategic thinking rather than availability. The relationship has a documented history measured in implementations, not sign-up date.

AI-washer: Describes a one-way relationship: the partner signed up for the ISV's program. The ISV does not refer opportunities to them because they have not demonstrated the customer outcomes that earn ISV trust. ISV selection rationale is availability and price. The “partnership” is a reseller agreement, not a co-built go-to-market motion.
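If your evaluation team wants to record results consistently across candidate partners, the framework reduces to a pass/fail tally per question, with the self-use question (Q2) treated as the entry gate described later in this article. The sketch below is a hypothetical illustration only: the question keys, the five-pass threshold, and the verdict labels are assumptions for the example, not thresholds from The AI Strategy Blueprint.

```python
# Hypothetical scorecard for the 7-question AI-washing diagnostic.
# Question keys, thresholds, and verdict labels are illustrative
# assumptions; only the self-use entry gate mirrors the article.

QUESTIONS = [
    "completed_implementation",      # Q1
    "internal_ai_use",               # Q2 (entry gate)
    "team_certifications",           # Q3
    "documented_methodology",        # Q4
    "data_security_specificity",     # Q5
    "ai_specific_revenue",           # Q6
    "bidirectional_isv_relationship" # Q7
]

def score_partner(answers: dict[str, bool]) -> str:
    """Map each question key to True (specific, experience-based answer)
    or False (vague, deferred, or generic). Returns a verdict string."""
    # Entry gate: failing the self-use test disqualifies on its own.
    if not answers.get("internal_ai_use", False):
        return "fail: did not pass the self-use entry gate"
    passed = sum(answers.get(q, False) for q in QUESTIONS)
    if passed == len(QUESTIONS):
        return "advance: genuine AI partner profile"
    if passed >= 5:
        return "conditional: probe the failed questions with references"
    return "fail: AI-washing profile"
```

Usage is one call per partner conversation, e.g. `score_partner({"internal_ai_use": True, "completed_implementation": True, ...})`; the value of writing it down is less the arithmetic than forcing evaluators to commit to a binary judgment per question rather than a general impression.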
The AI Strategy Blueprint
Chapter 11 of The AI Strategy Blueprint contains the complete AI-washing detection framework alongside the ISV evaluation matrix, the vTECH io case study, and the 10-point partner evaluation scorecard. If you are navigating AI partner selection, this chapter alone is worth the price of the book — available on Amazon for $24.95.
What Authenticity Looks Like: The vTECH io Contrast
The opposite of an AI-washer is a partner who has built genuine AI capability with measurable results. vTECH io, a technology solutions provider serving 1,300 customers across Florida, Georgia, Ohio, Texas, and Alabama, provides the clearest available contrast in the channel AI market.
Under the leadership of Chris McDaniel, Chief Revenue Officer, vTECH io developed a deliberate AI strategy rather than merely adding AI to existing marketing materials. The results validate the investment: $5–6 million in net new AI revenue within their first year, approximately 15% growth in total company revenue, 300%+ year-over-year AI PC sales increase, and consulting revenue that covered all bundling costs within eleven months.
“Every PC you get from us is AI ready. That’s our message. That’s our marketing. That’s our go-to-market strategy.”
— Chris McDaniel, CRO, vTECH io
Notice what this message is not: it is not “we are an AI-capable partner.” It is not “AI is part of our strategy going forward.” It is a specific, operational commitment: every hardware transaction is an AI transaction. This is the difference between marketing language and a genuine go-to-market motion.
| Dimension | AI-Washer | Genuine Partner (vTECH io Benchmark) |
|---|---|---|
| Message | “We are AI-capable” | “Every PC we sell is AI ready” — specific operational commitment |
| Investment | Reactive — responds to customer requests | Proactive — purchased bulk licensing before demand materialized |
| Engagement | Ad hoc — mentions AI when it comes up | Systematic — contacts every PC customer 2 weeks post-delivery to introduce AI |
| Revenue (Year 1) | Not tracked separately; conflated with general IT | $5–6M net new AI-specific revenue |
| Services | Products only; post-sale support is general IT | Consulting practice self-funding within 11 months |
| Internal AI Use | Minimal; not operationally deployed | Internal learning hub; team trained before customer conversations |
The vTECH io case study is documented in full in Chapter 11 of The AI Strategy Blueprint and in our companion article: The 10-Point AI Partner Evaluation Checklist. For the complete selection framework including the ISV evaluation matrix and tiered partner qualification process, that article is the logical next read.
“Effective AI Partners Use AI Themselves Before Selling It.”
Chris McDaniel emphasized this principle as the clearest differentiator between AI-washing partners and genuine ones. The full quote from Chapter 11 of The AI Strategy Blueprint:
“Effective AI partners use AI themselves before selling it. Partners who have integrated AI into their own operations understand implementation challenges, adoption barriers, and value realization patterns from direct experience. Partners who sell AI without using it cannot speak credibly to what customers will encounter.”
— John Byron Hanby IV, The AI Strategy Blueprint, Chapter 11 (citing vTECH io experience)
The self-use test is the single most time-efficient AI-washing detection method available. It requires one question (Q2 in the 7-question framework above) and produces a definitive diagnostic in two minutes. Partners who use AI in their own operations answer with operational specificity that cannot be manufactured from vendor materials. Partners who have not used AI internally reveal the gap through vague generalities and deferred examples.
A partner who has navigated their own AI adoption has a map of the terrain you will traverse. They have encountered the adoption resistance from team members who prefer existing workflows. They have debugged prompts that produced unexpected outputs. They have managed the change management challenge of introducing AI to employees who fear it. These experiences are prerequisites for guiding you through the same journey. Partners without them are navigating blindly and billing you for the education.
vTECH io invested heavily in internal training before their first customer AI conversation, creating what they describe as a learning hub within their organization. This internal deployment gave their team direct experience with implementation challenges, adoption barriers, and value realization patterns — credibility that translated directly into customer confidence and faster time-to-value in deployments.
The implication for your partner evaluation process: the self-use test is not supplementary to the 7 questions above. It is the entry gate. Partners who fail the self-use test should not advance to the full evaluation. The gap between their AI claims and their AI reality is too wide to bridge in the time your initiative requires.
How to Transition From an AI-Washer to a Genuine Partner
Detecting AI-washing is the diagnostic half of the partner evaluation equation. The practical half is: what do you do about it? Organizations facing an AI-washing incumbent partner have three options: develop the incumbent into a genuine AI partner, run a parallel track with a proven AI partner while the incumbent builds capability, or replace the incumbent. Each option carries different economics and risk profiles.
“Choose your partners with the same deliberation you would apply to hiring your leadership team; the consequences compound just as significantly.”
— John Byron Hanby IV, The AI Strategy Blueprint, Chapter 11
Regardless of which path you choose, the first step is the same: apply the 7 exposure questions above to your current partner honestly. If they fail Question 2 (self-use) and Question 6 (AI-specific revenue), the conversation about your path forward is necessary regardless of how long the relationship has existed. The partner's AI capability constrains your organization's AI potential — and that constraint compounds over time.
For the complete partner selection framework including the 10-point scorecard, ISV evaluation matrix, and tiered qualification process, see: The 10-Point AI Partner Evaluation Checklist. For the companion book resource to share with incumbent partners: The AI Partner Blueprint.
What Authentic Partner Deployments Produce
Real deployments from the book — quantified outcomes from Iternal customers across regulated, mission-critical industries.
Defense Contractor M&A AI Due Diligence
A major defense contractor used AirgapAI during M&A due diligence to analyze thousands of pages of target company documentation without exposing sensitive materials to external environments — the exact opposite of AI-washing in practice.
- Due diligence cycle compressed from weeks to days
- Zero data transmission to external environments
- Authentic partner-led deployment in a SCIF-adjacent context
County Government Citizen Services
A county government deployed AI for citizen services through a channel partner who had invested in genuine AI practice — deploying across five counties in a single day, then scaling to 4,500 users.
- Five counties deployed for under $2,500 per county
- Scaled from the initial pilot to a 4,500-user expansion discussion
- Channel partner with real SLED AI track record
Enterprise Agility: Multi-Use Case Deployment
An enterprise customer deployed AirgapAI across multiple use cases through a channel partner who had built genuine AI delivery methodology — demonstrating the time savings achievable with authentic partner execution.
- Multiple use cases in production simultaneously
- Measurable time savings across departments
- Land-and-expand from initial pilot to enterprise
Build the AI Literacy That Makes Partner Selection Defensible
Recognizing AI-washing requires foundational AI literacy — knowing enough to evaluate whether a partner's claims are operationally grounded. The Iternal AI Academy builds that literacy across every organizational role.
- 500+ courses across beginner, intermediate, advanced
- Role-based curricula: Marketing, Sales, Finance, HR, Legal, Operations
- Certification programs aligned with EU AI Act Article 4 literacy mandate
- $7/week trial — start learning in minutes
AI Partner Selection and Evaluation Consulting
Our AI Strategy consulting programs include structured partner evaluation, AI-washing audits of incumbent relationships, and ISV selection guidance — delivered as a 30-day Sprint or 6-month Transformation Program.
Frequently Asked Questions
What is AI-washing?

AI-washing refers to the practice of channel partners, VARs, and MSPs adding AI claims to their marketing materials without building genuine AI competency. The term is analogous to greenwashing in environmental marketing: the surface appearance of a capability that does not exist in substance. AI-washing is driven by market pressure — every technology partner faces customer demand for AI capability that vastly outpaces the 18–24 months required to build a genuine AI practice. The result is a market filled with partners who claim AI expertise they cannot deliver, consuming customer resources while producing limited value.
What is the fastest single test for detecting AI-washing?

The fastest single test is the self-use question: "How does your team use AI in your own business today? Give me a specific example from this week." Genuine AI partners use the products they sell. They have operational experience with AI in their own workflows — sales automation, proposal generation, customer research, support operations. AI-washing partners have added AI to their pitch deck without deploying it internally. They cannot describe specific operational uses because there are none. This single question typically reveals the answer in two minutes and determines whether the full 7-question evaluation is worth proceeding with.
How is AI-washing different from a partner legitimately early in building an AI practice?

The difference is transparency and honesty. A partner legitimately early in building an AI practice will acknowledge their current capability level, describe their investment roadmap, and be clear about what they can and cannot deliver today. AI-washing involves claiming capability that does not exist — implying or asserting AI expertise that the partner has not earned. The diagnostic question is: does the partner describe their AI capability accurately, or do their claims exceed their demonstrated track record? Legitimate emerging AI partners are also appropriate for Tier 3 consideration in your partner tiering model — suitable for monitoring and low-risk exploratory engagements while they build their practice.
Can an incumbent AI-washing partner develop into a genuine AI partner?

Yes, and this is often the most efficient path forward for incumbents with deep knowledge of your environment and genuine intent to improve. The AI Partner Blueprint by John Byron Hanby IV provides a complete capability development roadmap for channel partners building AI practices. Sharing the book signals your organization's AI transformation commitment and gives the partner a structured development path. This approach acknowledges a practical reality: partners with deep organizational knowledge often deliver better outcomes than technically superior partners who must learn your environment from scratch. See the transition options in this article for a framework for deciding between development, parallel tracking, and replacement.
What results should a genuine AI partner be able to demonstrate?

The benchmark from Chapter 11 of The AI Strategy Blueprint is the vTECH io case study: $5–6 million in net new AI revenue in Year 1 of a genuine AI practice, representing approximately 15% growth in total company revenue, with 300%+ AI PC sales increase year-over-year and consulting revenue covering all bundling costs within 11 months. Not every partner will reach these figures, but a genuine AI practice should have measurable AI-specific revenue, completed implementation count, and customer expansion patterns they can describe. Partners who cannot quantify their AI-specific results have not built a practice — they have added AI to their marketing.
What is The AI Partner Blueprint, and how does it relate to AI-washing?

The AI Partner Blueprint by John Byron Hanby IV is a companion book to The AI Strategy Blueprint, written for channel partners building AI practices. Its relevance to AI-washing is twofold. First, it defines exactly what a genuine AI practice requires — giving enterprise buyers a standard against which to evaluate partner claims. Second, it provides a development roadmap for incumbent partners who are willing to build genuine capability. Sharing the book with an AI-washing partner who wants to improve creates accountability and urgency. Partners who read it and act on it are demonstrating the type of investment that distinguishes genuine AI practice development from continued marketing claims. Available at iternal.ai/ai-partner-blueprint.