Blog
Thoughts on local-first AI, developer tools, and what we're building.
Gemma 4: Google's first Apache 2.0 open model is also its best (7 min read)
How to use our free VRAM calculator for local LLMs (6 min read)
AI IDE cost comparison: how much are you really paying? (7 min read)
How to plan your LLM context window budget (6 min read)
The real cost of Kilo Code in 2026 (7 min read)
How to migrate from Tabnine to Bodega One (8 min read)
How to migrate from Windsurf to Bodega One (8 min read)
Augment Code Is Sunsetting Completions. Here's What to Do Next. (9 min read)
Are local LLMs actually good enough for real development work in 2026? (8 min read)
GitHub Copilot vs Cursor vs Bodega One: an honest comparison (9 min read)
How to run DeepSeek locally with Bodega One (7 min read)
Air-gapped AI development for regulated industries (7 min read)
LM Studio + Bodega One: complete setup guide (6 min read)
AI coding tools that work completely offline (2026) (8 min read)
Which GPU do you actually need for local AI? A developer's guide (7 min read)
The best local AI IDEs in 2026: a developer's honest comparison (10 min read)
Air-gap mode: 9 layers that guarantee zero network egress (6 min read)
KV cache: how Bodega One gets 40-70% reuse on every LLM session (5 min read)
Setting up Ollama with Bodega One (and which models to actually run) (8 min read)
How QEL works: AI that proves its own code (6 min read)
The real cost of AI coding subscriptions vs one-time purchase (3-year analysis) (8 min read)
BYOLLM: what it means and why it matters (7 min read)
Why we built a local-first AI IDE (6 min read)