Everything you need to get started.

Bodega One is built to be picked up fast. Install it, connect your LLM, and you're building with AI in minutes — no config files to wrestle with.

Up and running in three steps.

No Docker. No environment setup. No CLI.

  1. Download and install

    Grab the installer for your platform — Windows, macOS, or Linux. Double-click, follow the prompts. No PATH setup, no CLI required.

  2. Connect your LLM

    Open Settings → LLM Providers. Pick a preset (Ollama for local, OpenAI or Anthropic for cloud) and paste your API key or endpoint URL. Takes about 30 seconds.

  3. Start building

    Open a folder in Code Mode or start a conversation in Chat Mode. The AI has full access to your files and 23 built-in tools from the first message.

Beta ships May 2026. Join the waitlist to be first in line.

Four things to understand.

These aren't marketing terms — they describe how Bodega One is actually built.

Code Mode

Full IDE with an AI agent

Monaco editor, file tree, multi-terminal, and an autonomous coding agent in one window. The agent writes real diffs — not suggestions. You review and apply.

Chat Mode

Conversational AI with real tools

Full-screen AI chat with persistent memory and 23 built-in tools. The AI can read files, run shell commands, search the web, and more — directly from the conversation.
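
To make "real tools" concrete: in the common tool-calling pattern, each tool is declared to the model as a schema it can invoke. Below is a minimal sketch in the standard OpenAI function-calling format; the name read_file and its parameters are illustrative, not Bodega One's published schema.

    # Hypothetical sketch of a file-reading tool declared in the standard
    # OpenAI function-calling format. The name and fields are illustrative;
    # Bodega One's actual tool schemas aren't shown on this page.
    read_file_tool = {
        "type": "function",
        "function": {
            "name": "read_file",
            "description": "Read a text file from the open workspace.",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {
                        "type": "string",
                        "description": "Workspace-relative file path.",
                    }
                },
                "required": ["path"],
            },
        },
    }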

QEL

Quality Enforcement Layer

Every code change the agent writes passes through 5 verification stages before you see it, including contract extraction, incremental checks, proof gates (tsc, pytest, py_compile), and targeted line-level repair. The AI can't game its own checks.

How QEL works →
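
For a concrete picture of what a proof gate is, here is a minimal sketch of the pattern — not Bodega One's internal code; the file paths and commands are assumptions: run real tooling on the changed files and reject the edit if anything fails.

    # Minimal sketch of a "proof gate": an agent's edit is accepted only if
    # real tooling passes. Illustration of the pattern, not Bodega One's
    # internal implementation; paths and commands are assumptions.
    import subprocess

    def passes_proof_gates(changed_py_files: list[str]) -> bool:
        # Gate 1: every changed Python file must at least byte-compile.
        for path in changed_py_files:
            if subprocess.run(["python", "-m", "py_compile", path]).returncode != 0:
                return False
        # Gate 2: the test suite must pass (pytest exits non-zero on failure).
        return subprocess.run(["pytest", "-q"]).returncode == 0

    if not passes_proof_gates(["src/example.py"]):
        print("Edit rejected: proof gates failed.")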

BYOLLM

Bring Your Own LLM

Connect any of 10+ supported providers. Run Ollama locally for full privacy. Switch to Claude for complex reasoning. Swap models any time — never locked in.

What BYOLLM means →
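
One way to picture what "never locked in" buys you: many supported providers speak the same OpenAI-compatible API, so switching is often just a different base URL and key. A rough sketch follows; the endpoint URLs and model names are assumptions for illustration and may differ from what Bodega One uses internally.

    # Sketch: swapping providers behind one OpenAI-compatible client.
    # Base URLs and model names below are assumptions for illustration.
    from openai import OpenAI

    # Local and private: Ollama exposes an OpenAI-compatible endpoint.
    local = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

    # Hosted: Groq's OpenAI-compatible endpoint, same client code.
    cloud = OpenAI(base_url="https://api.groq.com/openai/v1", api_key="YOUR_KEY")

    for client, model in [(local, "llama3.2"), (cloud, "llama-3.1-8b-instant")]:
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": "Say hello."}],
        )
        print(reply.choices[0].message.content)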

Connect any model you want.

10+ provider presets built in. Open Settings → LLM Providers, pick a preset, enter your key or local endpoint. Takes about 30 seconds.

Ollama (recommended)

Runs fully local — best for privacy

LM Studio

Local models with a GUI

OpenAI

GPT-4o, o1, and more

Anthropic

Claude 3.5, Claude 4 series

Groq

Fast inference for open models

Together AI

Open models at scale

Mistral AI

Mixtral and Mistral models

Gemini

Google Gemini Pro and Flash

DeepSeek

DeepSeek-V3 and Coder

Llama

Meta Llama 3.x series

Using Ollama? That's the fastest path to full privacy.

Install Ollama, pull a model (ollama pull llama3.2), then set the endpoint to http://localhost:11434 in Bodega One. Nothing leaves your machine.
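
If you want to sanity-check the local endpoint before pointing Bodega One at it, a quick call to Ollama's own API confirms the model answers on localhost. The model name assumes the one pulled above.

    # Quick local check against Ollama's native API; nothing leaves the machine.
    # Assumes you've already run: ollama pull llama3.2
    import json
    import urllib.request

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": "llama3.2", "prompt": "ping", "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])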

Still have questions?

Discord is the fastest way to get answers from the team and other beta users. Or join the waitlist and we'll walk you through setup on day one.