
Augment Code Alternative

Your completions are gone. Ours run on your machine.

Augment removed inline completions and Next Edit from its Indie, Standard, and Legacy plans on March 31, 2026. Their replacement, Intent, is macOS-only with no Windows or Linux roadmap. Here is what the sunset means for developers. Bodega One runs local FIM completions on all three platforms through models you own. No subscription. No sunset date.

What you lost. What you can get.

Augment Code (Non-Enterprise)

  • Inline completions and Next Edit removed March 31, 2026 for Indie, Standard, and Legacy plans
  • AI Chat and Code Review continue. Augment pivoted to Intent, their agent orchestration product
  • Intent (their replacement) is macOS-only. No Windows or Linux roadmap
  • Monthly subscription still required for remaining features
  • Enterprise plans kept everything. Individual plans lost the editing flow

Bodega One

  • Local FIM completions run on your machine, permanently
  • Full IDE with Monaco editor, AI chat, and autonomous agent
  • One-time purchase: $79 Personal, $109 Pro. No renewal.
  • BYOLLM: 10+ LLM providers. Swap models in seconds.
  • Air-gap mode: 9 enforcement layers. Zero bytes leave your machine.

What Augment Code plans cost now.

Restructured March 2026. Inline completions removed March 31 for all non-Enterprise plans.

  • Community: Free. AI chat only; inline completions removed March 31, 2026.
  • Indie: $20/mo. Single developer; completions removed March 31, 2026.
  • Standard: $60/mo. Up to 20 users, more usage credits; completions removed.
  • Max: $200/mo. Up to 20 users, maximum usage caps; completions removed.

Source: augmentcode.com/pricing. Check for current rates.

Everything Augment dropped. And what it never offered.

Local FIM Autocomplete

Fill-in-the-Middle completions powered by models running on your hardware through Ollama. qwen2.5-coder, codellama, deepseek-coder, and more. No API calls. No rate limits.

Zero Usage Caps

Your machine, your tokens. Generate as many completions as your GPU can handle. No daily limits, no throttling, no "fair use" policies that kick in when you actually need it.

Works With Your Hardware

6 GB of VRAM gets you solid completions with qwen2.5-coder 7B. 16 GB runs the 32B model. Apple Silicon users get MLX-optimized inference. We detect your hardware and recommend the right model.
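The VRAM-to-model guidance above can be sketched as a simple lookup. This is an illustrative mapping built from the figures on this page; the function name and tiers are hypothetical, not Bodega One's actual detection logic.

```python
# Sketch: map available VRAM to a qwen2.5-coder size that should fit.
# Thresholds mirror the guidance above; this is illustrative only.

def recommend_fim_model(vram_gb: float) -> str:
    """Pick a qwen2.5-coder variant based on available VRAM."""
    if vram_gb >= 16:
        return "qwen2.5-coder:32b"   # 16 GB runs the 32B model
    if vram_gb >= 6:
        return "qwen2.5-coder:7b"    # 6 GB gives solid 7B completions
    return "qwen2.5-coder:1.5b"      # low-VRAM fallback

print(recommend_fim_model(8))  # qwen2.5-coder:7b
```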

Privacy by Default

Air-gap mode enforces 9 independent layers of network isolation. Tool filtering, shell blocking, auto-updater blocking, git IPC blocking, and more. Zero bytes leave your machine. Not a promise. An architecture.

Full IDE, Not Just Completions

Monaco editor, AI chat, autonomous coding agent, 23 built-in tools. Completions are one piece. You also get a full development environment with an agent that can read, write, and verify code.

Quality Enforcement Layer

QEL catches what raw completions miss. Structural completeness, contract compliance, language-specific patterns. Every code change gets verified before it hits your workspace.

Built for completions, not bolted on.

FIM-Compatible Models

qwen2.5-coder, codellama, deepseek-coder, codestral, starcoder2, codegemma

How FIM Works

FIM splits your cursor position into prefix (code before) and suffix (code after). The model generates the middle. This is how tab-complete works in tools like Augment and Cursor. The difference: Bodega One runs the model locally.
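The prefix/suffix split described above can be sketched in a few lines. This uses qwen2.5-coder's fill-in-the-middle tokens as an assumed example; token names vary between model families, so check your model's card before relying on them.

```python
# Sketch: assembling a FIM prompt from the cursor position.
# The special tokens shown are qwen2.5-coder's; other FIM-capable
# models (codellama, starcoder2, ...) use different token names.

def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Code before the cursor is the prefix, code after it the suffix;
    the model is asked to generate the middle."""
    return f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"

before_cursor = "def add(a, b):\n    return "
after_cursor = "\n\nprint(add(2, 3))\n"
prompt = build_fim_prompt(before_cursor, after_cursor)
```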

Debounce and Latency

Adaptive debounce adjusts to your typing speed. On fast hardware (16+ GB VRAM, qwen2.5-coder 7B), expect under 200ms latency. No network round-trip means no variable latency spikes.
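One way an adaptive debounce can work is to track a running average of inter-keystroke gaps and wait longer while you are typing in fast bursts. The sketch below is illustrative; the constants and smoothing factor are invented, not Bodega One's tuning.

```python
# Sketch: adaptive debounce driven by typing speed.
# Fast bursts (small gaps) push the wait time up, pauses pull it down,
# so completions fire quickly when you stop typing.

class AdaptiveDebounce:
    def __init__(self, min_ms: float = 50, max_ms: float = 300):
        self.min_ms, self.max_ms = min_ms, max_ms
        self.avg_gap_ms = max_ms  # running average of keystroke gaps

    def on_keystroke(self, gap_ms: float) -> float:
        """Record the gap since the last keystroke; return how long to
        wait before firing a completion request."""
        # exponential moving average of typing speed
        self.avg_gap_ms = 0.7 * self.avg_gap_ms + 0.3 * gap_ms
        # small average gap (fast typing) -> longer debounce
        wait = self.max_ms - self.avg_gap_ms
        return min(self.max_ms, max(self.min_ms, wait))
```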

Air-Gap Architecture

9 independent enforcement layers. Not one kill switch. Nine separate systems that each block network access independently. Disable one and the other eight still hold.
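The "disable one, the other eight hold" property can be illustrated as a veto model: network access is allowed only if no active layer blocks it. The layer names below echo this page; the logic is a toy sketch, not Bodega One's implementation.

```python
# Toy sketch: independent enforcement layers as vetoes.
# Any single active layer is enough to block network access, so
# disabling one layer leaves the others in force.

def is_network_allowed(layers: dict[str, bool], airgap: bool) -> bool:
    """In air-gap mode, every active layer independently vetoes access."""
    if not airgap:
        return True
    return not any(layers.values())

layers = {
    "tool_filter": True,
    "shell_block": True,
    "updater_block": True,
    "git_ipc_block": True,
}
layers["tool_filter"] = False          # disable one layer...
print(is_network_allowed(layers, airgap=True))  # ...the rest still block
```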

See our full local LLM guide | Learn about air-gap mode | Explore BYOLLM providers

Completions that land clean.

Every file Bodega One writes or completes passes through three verification levels before the change lands: pattern and compile checks after every write, micro-proof gates on every second write, and a full structural verifier at loop end. Not a linter. A verification pipeline.

Incremental Verification

Pattern and compile check after every file write.

Micro-Proof Gates

tsc / py_compile runs on every second write.

Full Verification

Structural verifier post-loop. Pass threshold 80 for new files.

Replacing what Augment Code removed.

  • What are the current Augment Code pricing tiers?

    As of March 2026: Community (free, AI chat only), Indie ($20/month, single developer), Standard ($60/month, up to 20 users), Max ($200/month, up to 20 users). Inline completions and Next Edit were removed for all non-Enterprise plans on March 31, 2026. Enterprise plans retain completions. See augmentcode.com for current pricing.

  • Is Bodega One free?

    No. Personal is $79 and Pro is $109. Both are one-time purchases. No subscription, no monthly fees, no renewal. You buy it once and own it. Complete the 14-day beta and you get a $30 promo code by email before launch.

  • Do I need Ollama for local completions?

    Yes. Ollama runs the local models that power FIM completions. It is free and takes about 2 minutes to install. Once running, Bodega One detects it automatically and lists your installed models.

  • What models work best for FIM?

    qwen2.5-coder is our top recommendation. It ships in sizes from 1.5B to 32B, so it fits most hardware. codellama, deepseek-coder, codestral, starcoder2, and codegemma all support FIM as well.

  • Can I still use cloud LLMs?

    Yes. Bodega One supports BYOLLM with 10+ providers including OpenAI, Groq, Together AI, OpenRouter, and Azure OpenAI. Use local for completions and cloud for heavy reasoning. Or go fully local. Your call.

  • Is my data actually safe?

    Air-gap mode enforces 9 independent layers of network isolation. Zero bytes leave your machine. This is not a privacy policy. It is an architecture: nine separate systems that each block network access independently.

  • When is Bodega One available?

    Beta opens May 2026 for the first 200 users. Full launch is July 6, 2026. Join the waitlist now and you will be first to know when beta invites go out.

Keep your completions. Lose the subscription.

One-time purchase. Local FIM autocomplete. 23 built-in tools. An autonomous agent that verifies its own work. Your code stays on your machine.

Join the Waitlist

Windows, macOS, Linux. Requires Ollama for local completions.