The Bodega One Roadmap
This is what we shipped, what we are building right now, and what comes next. No vague promises. No “exciting things coming soon.” Just the actual list.
Beta opens May 2026. Full launch July 6, 2026. See the full changelog for version-by-version history.
Shipped at Launch
Everything below is in the product at general availability on July 6, 2026. If you buy a license that day, this is what you get.
Editor
- Monaco editor
- Multi-file tabs with drag-to-reorder
- Inline streaming diff -- character-level highlighting as the agent writes
- Hunk-by-hunk diff review: accept or reject per code hunk
- Fill-in-the-middle (FIM) code completion as you type (like Copilot, fully local)
- LSP client: IntelliSense, diagnostics, go-to-definition
- Git integration with AI-generated commit messages and PR descriptions
Terminal
- Multiple terminal tabs with full xterm.js emulation
- AI terminal assist: command suggestions and error explanations
- Shell security: risk classification before execution
Chat Mode
- 23 built-in tools: file ops, shell, web search, memory, multi-agent
- Token-by-token streaming with 66x render reduction
- Research mode: web synthesis before responding
- Extended thinking support for compatible models
- Tool call display with expandable cards
- Tool approval cards: approve or reject before execution
- Ghost text prediction in the chat input (Tab to accept)
- Speech-to-text via Ollama Whisper (fully local, air-gap safe)
Autonomous Agent
- Autonomous coding agent with up to 16-iteration agentic loop
- Permission modes: Ask, Plan, and Act
- Multi-agent swarm: spawn parallel worker agents mid-task
- Repo map with PageRank file ranking
- KV cache optimization: 40-70% reuse per session
- Checkpoint system: per-tool file snapshots for rollback
Quality Enforcement Layer (QEL)
- Syntax and semantic checks after every file write
- Proof gates: compile-based verification for Go, Rust, Java, C#
- Structural verifier: catches stubs, incomplete code, hallucinated imports
- Learning service: tracks failures per tool-model pair to prevent repeat errors
Privacy and Security
- Air-gap mode with 9 enforcement layers -- zero network egress
- Observation masker: strips credentials and PII before LLM calls
- SSRF protection: private IP blocking, non-HTTP protocol blocking
- Shell credential scan: catches leaked keys in terminal output
LLM and Providers
- 10+ provider presets: Ollama, LM Studio, vLLM, llama.cpp, OpenAI, Groq, and more
- BYOLLM (bring your own LLM): no bundled model, no markup, no usage tracking
- Hardware-aware model recommendations based on detected RAM and VRAM
- Provider auto-detection: scans for running LLM servers on startup
- Fallback routing: automatic failover on provider errors
- OpenAI-compatible API server on port 1337 (Bodega One as a provider)
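Because the built-in server is OpenAI-compatible, any client that speaks that API can treat Bodega One as a provider. A minimal sketch in Python, using only the standard library: the port comes from the list above, but the exact route (`/v1/chat/completions`) and the model name are assumptions based on the OpenAI API convention, not documented Bodega One specifics.

```python
import json

# Bodega One's documented local port; the /v1 route layout is assumed
# to follow the OpenAI API convention.
BASE_URL = "http://localhost:1337/v1"

def build_chat_request(model: str, prompt: str) -> dict:
    """Assemble the URL, headers, and JSON body an OpenAI-compatible
    chat completions endpoint expects. "model" is whatever name your
    configured provider serves -- the value here is a placeholder."""
    return {
        "url": f"{BASE_URL}/chat/completions",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = build_chat_request("local-model", "Explain this function.")
# Send with any HTTP client (urllib.request, requests, curl, ...).
```

Any tool that accepts a custom OpenAI base URL can point at the same endpoint, which is the whole point of the provider mode.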
Memory and Knowledge
- 4-layer memory: session context, project context, user preferences, long-term facts
- Knowledge base: add URLs or text as persistent context sources
- Skills system: YAML-defined custom skills with hot-reload (no restart needed)
- MCP server support: connect any MCP server with tool namespacing
- Per-session and per-project memory scoping
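To make the skills system concrete: skills are plain YAML files that hot-reload when saved. The sketch below shows the general shape such a file might take -- every field name (`name`, `description`, `prompt`, `tools`) is an illustrative assumption, not the documented Bodega One schema.

```yaml
# Hypothetical skill file -- field names are illustrative, not the
# documented schema. Saving it into the skills directory is picked up
# by hot-reload, so no restart is needed.
name: changelog-entry
description: Draft a changelog entry from the current git diff
prompt: |
  Summarize the staged changes as a user-facing changelog entry.
  Group items into features, fixes, and internal changes.
tools:
  - shell      # e.g. to run `git diff --staged`
  - file_ops   # e.g. to append to CHANGELOG.md
```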
Onboarding and Settings
- First-run wizard: provider auto-detection, model download flow
- Guided tour: highlights key UI elements in both Chat and Code modes
- 60+ configurable settings keys
- Settings import/export as JSON
Q3 2026 - In Progress
These are features we are actively building through Q3 2026 -- July through September. They will ship as updates, not all at once. Pro plan users get early access before general rollout.
Context and Knowledge
- RAG embeddings for knowledge base: semantic search across your project files -- not just keyword matching.
- Decision log: the agent records why it made each code choice. Reviewable per session.
Interface
- Artifact renderer: render HTML, charts, and diagrams inline in the chat response.
- Advisor panel: persistent side panel for code review and architectural advice.
Cloud Boost
- Budget limits and per-session cost tracking: set a daily or monthly spend cap when routing to cloud LLM providers.
What comes next depends on what you tell us
After Q3, we are not committing to a specific feature list yet. What moves up in the queue is driven by beta feedback. If 50 users hit the same wall, that wall gets fixed before anything else.
We have a list of longer-horizon work -- teams features, deeper agent capabilities, extended editor integrations. We will publish that list when there is something real to show, not as a promise board.
Join Discord to weigh in on what comes next.
Questions about the roadmap
When does the beta start?
Beta opens May 2026, first 200 users. Full general availability on July 6, 2026. Join the waitlist to get an early invite.
Will the roadmap change after launch?
Yes. Priority shifts based on what beta users tell us. The Q3 items above are the ones we consider most important right now. That can change.
Do Pro users get Q3 features earlier?
Yes. Pro plan users get early access to new features as they land -- before general rollout. If any Q3 items ship during the beta period, Pro users are first.
Can I suggest something for the roadmap?
Yes. Join the Discord and post in the feedback channel. We read it. Enough signal on a feature moves it up. That is not a polite non-answer -- it has already happened.
Where can I see the full changelog?
Everything that has already shipped is documented in the changelog with version numbers and dates.
Shape what we build next.
Join the beta. Use the product. Tell us what matters. We read every piece of feedback and it directly influences what moves up in the queue.
Beta opens May 2026. Full launch July 6, 2026. Pro users get early access to Q3 features.