100% Local AI Easy Button

AirgapAI is a fully isolated, 100% local, ChatGPT-like (but more capable) solution that enables users to leverage their own data securely at a significantly lower price point than MS Copilot or ChatGPT: AirgapAI costs only $3-4 per user per month, versus $20-30 per user per month for MS Copilot.

The world thinks in files. We think in Ideas®.

AI Solution Comparison

Leading AI Productivity Solution Feature Comparison

| Feature | AirgapAI | ChatGPT | MS Copilot |
| --- | --- | --- | --- |
| Price | ≈$3/user/month | $20/user/month | $30/user/month |
| One-Time Perpetual Licences | ✓ Yes | | |
| Runs on External 3rd Party Server | ✓ No | ✗ Yes | ✗ Yes |
| No Annual Maintenance Fee | ✓ Yes | | |
| On-Prem | ✓ Yes | | |
| Use Your Proprietary Data | ✓ Yes | | ✓ Yes |
| Modular Component Data Governance | ✓ Yes | | |
| Patented Data Ingestion | ✓ Yes | | |
| 78X LLM RAG Accuracy via Blockify® | ✓ Yes | | |
| 100% Local to Device | ✓ Yes | | |
| Role Based Data Provisioning | ✓ Yes | | |
| Legacy PC Support | ✓ Yes | | |
| AI PC Support | ✓ Yes | | ✓ Yes |
| Own Your Data | ✓ Yes | | |
| Control of Application and Model Upgrades | ✓ Yes | | |
| Use any Open Source LLM | ✓ Yes | | |
| Control When You Upgrade Your LLM | ✓ Yes | | |
| Use a Fine-Tuned LLM | ✓ Yes | | |
| Quick Start Workflows by Role and Persona | ✓ Yes | | |
| Chat with Multiple AI Personas (Entourage Mode) | ✓ Yes | | |
| Integrates with MS Office | Future* | | ✓ Yes |

AirgapAI System Requirements

For users with the latest generation of hardware featuring either an integrated or a dedicated GPU, AirgapAI offers large language model inference that runs common open-source LLMs 100% locally on the device. Generation speed scales with hardware performance (better hardware = faster output). For example, running the LLAMA 3.2 3B default model on Intel HD Integrated Graphics with 8 GB vRAM yields approximately 100+ words per minute, as tested on the Intel® Core™ Ultra 7 165U with vPro®.
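
AirgapAI bundles its own local runtime, so the following is an illustration only: a minimal sketch using the open-source llama-cpp-python package (an assumed stand-in, not AirgapAI's actual inference stack) to run a locally downloaded, quantized LLAMA 3.2 3B model and measure a words-per-minute figure like the one quoted above. The model path is hypothetical.

```python
# Illustrative sketch only; llama-cpp-python is an assumed stand-in runtime,
# not AirgapAI's actual inference stack.
import time

from llama_cpp import Llama  # pip install llama-cpp-python

# Hypothetical path to a locally downloaded, quantized Llama 3.2 3B model.
llm = Llama(
    model_path="models/llama-3.2-3b-instruct-q4.gguf",
    n_gpu_layers=-1,  # offload all layers to the integrated/dedicated GPU
    n_ctx=4096,
)

prompt = "Summarize the benefits of running an LLM entirely on-device."
start = time.perf_counter()
result = llm(prompt, max_tokens=256)
elapsed = time.perf_counter() - start

text = result["choices"][0]["text"]
words = len(text.split())
print(f"Generated {words} words in {elapsed:.1f}s (~{words / (elapsed / 60):.0f} WPM)")
```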

For users who do not have latest-generation hardware with an integrated or dedicated GPU, AirgapAI offers a CPU-based vector search capability called Rapid Answers, which runs on any modern CPU (2016 or later).
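
Rapid Answers itself is proprietary, so the sketch below illustrates only the general technique: CPU-only vector search over small ingested text blocks, here using scikit-learn TF-IDF vectors and cosine similarity rather than AirgapAI's actual IdeaBlock implementation. The sample blocks and the rapid_answer helper are hypothetical.

```python
# Illustrative sketch of CPU-only vector search; not AirgapAI's actual
# Rapid Answers / IdeaBlock implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical "IdeaBlock"-style snippets ingested from local documents.
blocks = [
    "AirgapAI runs entirely on the local device; no data leaves the PC.",
    "Blockify ingestion splits documents into small, self-contained blocks.",
    "Rapid Answers retrieves the best-matching blocks for a user question.",
]

vectorizer = TfidfVectorizer()
block_vectors = vectorizer.fit_transform(blocks)  # fits on any modern CPU

def rapid_answer(question: str, top_k: int = 2) -> list[str]:
    """Return the top_k blocks most similar to the question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, block_vectors)[0]
    best = scores.argsort()[::-1][:top_k]
    return [blocks[i] for i in best]

print(rapid_answer("Does my data ever leave the device?"))
```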

Recommended/required specifications (assumes the device is not running other compute-intensive tasks; a rough pre-flight check sketch follows the list):

  • Large Language Model Inference
    • LLAMA 3.2 3B Default Model – Intel HD Integrated Graphics with 8GB vRAM (minimum)
    • LLAMA 3.2 1B Default Model – Intel HD Integrated Graphics with 8GB vRAM (minimum)
    • GEMMA 9B – NVIDIA RTX 5000 or similar from AMD
    • LLAMA 3 8B – NVIDIA RTX 5000 or similar from AMD
  • Rapid Answers IdeaBlock Vector Search
    • Any modern CPU (2016 or later)
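
As a rough pre-flight check against these specifications, the hedged sketch below detects a dedicated NVIDIA GPU and its vRAM via PyTorch; portably reading Intel integrated-GPU vRAM is non-trivial, so anything else falls back to the CPU-based Rapid Answers tier. This is illustrative only, not an AirgapAI installer check.

```python
# Illustrative pre-flight check; not part of AirgapAI. Only the
# NVIDIA/dedicated-GPU path is detected here (via PyTorch/CUDA).
import platform

try:
    import torch  # pip install torch

    has_gpu = torch.cuda.is_available()
    vram_gb = torch.cuda.get_device_properties(0).total_memory / 2**30 if has_gpu else 0.0
except ImportError:
    has_gpu, vram_gb = False, 0.0

if has_gpu and vram_gb >= 8:
    print(f"Dedicated GPU with {vram_gb:.0f} GB vRAM: local LLM inference should work")
else:
    print(f"No dedicated GPU detected ({platform.processor()}): use CPU-based Rapid Answers")
```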