Tags: setup, local-first, BYO LLM

LM Studio + Bodega One: complete setup guide

Bodega One · 6 min read
Quick answer

LM Studio runs a local model server on port 1234 by default. In Bodega One, go to Settings → Providers → LM Studio, set the base URL to http://localhost:1234/v1, and select your loaded model. Total setup: under 5 minutes. See all supported providers.

LM Studio is one of the best ways to run local LLMs, especially on macOS with Apple Silicon. The GUI makes model management easy, and it exposes an OpenAI-compatible API that works with anything that supports the OpenAI endpoint format, including Bodega One.

This guide covers the full setup: installing LM Studio, picking the right model for your hardware, and wiring it to Bodega One so the AI chat and autonomous agent both use your local model.

Step 1: Install LM Studio

Download LM Studio from lmstudio.ai. It runs on macOS (Apple Silicon and Intel), Windows, and Linux. The Apple Silicon build takes advantage of Metal for GPU acceleration. If you have an M-series Mac, this is probably the fastest path to running a good local model.

Step 2: Pick a model for your hardware

LM Studio has a model browser built in. Search by name or browse by size. For coding tasks, these work well:

  • 8GB RAM / 8GB VRAM: Qwen3-8B (Q4_K_M), strong reasoning and good code
  • 16GB RAM / 12GB VRAM: Qwen2.5-14B (Q4_K_M), noticeably stronger on complex tasks
  • Apple Silicon 16GB: Qwen3-8B MLX (MLX build runs faster than GGUF on M-series)
  • Apple Silicon 64GB+: Qwen2.5-Coder-32B MLX, the gold standard for local coding
  • 24GB+ VRAM: Qwen2.5-Coder-32B (Q4_K_M), competitive with GPT-4o on coding

For a complete breakdown by hardware tier, see the GPU guide for local AI.
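If you want a quick sanity check before downloading, you can estimate VRAM needs from the parameter count: quantized weights take roughly (parameters × bits per weight ÷ 8) bytes, plus overhead for the KV cache and runtime buffers. The 20% overhead multiplier below is an assumption, not a measurement — treat this as a rough sketch, not a guarantee:

```python
def est_vram_gb(params_b: float, bits: int = 4, overhead: float = 1.2) -> float:
    """Rough VRAM estimate in GB for a quantized model.

    params_b: parameter count in billions (e.g. 8 for Qwen3-8B).
    bits: quantization width (4 for Q4_K_M, 8 for Q8_0).
    overhead: multiplier for KV cache and runtime buffers (assumed, not exact).
    """
    weights_gb = params_b * bits / 8  # 1B params at 4-bit ≈ 0.5 GB of weights
    return round(weights_gb * overhead, 1)

if __name__ == "__main__":
    for name, size_b in [("Qwen3-8B", 8), ("Qwen2.5-14B", 14), ("Qwen2.5-Coder-32B", 32)]:
        print(f"{name}: ~{est_vram_gb(size_b)} GB")
```

By this estimate an 8B model at Q4 needs roughly 5 GB, which is why it fits in the 8GB tier above with room for context.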

Step 3: Load the model and start the server

In LM Studio, click “Load” next to your chosen model. Once it's loaded, go to the “Local Server” tab (the icon that looks like a server rack on the left sidebar). Click “Start Server”. LM Studio will start an OpenAI-compatible server, typically on http://localhost:1234.

You can verify it's running by visiting http://localhost:1234/v1/models in your browser. You should see a JSON response listing the loaded model.
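If you'd rather check from a script than a browser, a few lines of Python stdlib will do it. This is a minimal sketch assuming the default base URL; the endpoint path follows the OpenAI-compatible API that LM Studio exposes:

```python
import json
import urllib.request

def models_url(base_url: str) -> str:
    """Build the /models endpoint from an OpenAI-style base URL."""
    return base_url.rstrip("/") + "/models"

def list_models(base_url: str = "http://localhost:1234/v1") -> list[str]:
    """Return the IDs of models the server currently reports as loaded."""
    with urllib.request.urlopen(models_url(base_url), timeout=5) as resp:
        payload = json.load(resp)
    return [m["id"] for m in payload.get("data", [])]

if __name__ == "__main__":
    try:
        print("Loaded models:", list_models())
    except OSError as err:
        print("Server not reachable:", err)
```

An empty list usually means the server is up but no model is loaded, which is the single most common setup mistake.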

Step 4: Connect Bodega One

Open Bodega One and go to Settings → Providers. You'll see a list of provider presets. Select LM Studio.

The default configuration should work without changes. Bodega One defaults to http://localhost:1234/v1 for LM Studio. If your LM Studio server is on a different port, update the base URL to match.

Click “Test Connection”. If the connection is successful, you'll see the model name appear in the provider status. If it fails, double-check:

  • LM Studio server is running (check the Local Server tab)
  • A model is actually loaded in LM Studio (not just downloaded)
  • The port in Bodega One matches the port LM Studio is using
  • No firewall is blocking localhost connections
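The first and third items on that checklist can be verified in one shot with a quick TCP probe. This sketch assumes the default port 1234; it only tells you whether something is listening, not whether a model is loaded:

```python
import socket

def port_reachable(host: str = "localhost", port: int = 1234, timeout: float = 2.0) -> bool:
    """True if something is accepting TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if port_reachable():
        print("LM Studio port is open; next, confirm a model is loaded.")
    else:
        print("Nothing listening on :1234; start the server in the Local Server tab.")
```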

Step 5: Test it

Open the AI chat panel in Bodega One and send a test message. If you get a response, you're fully connected. The autonomous coding agent will also use the same provider configuration. Open a project and try a small task to confirm.
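If you prefer to test outside the GUI, you can send the same kind of request Bodega One sends. This is a hedged sketch against the OpenAI-compatible chat completions endpoint; the model name `qwen3-8b` is a placeholder — use whatever ID `/v1/models` reports for your loaded model:

```python
import json
import urllib.request

def build_payload(prompt: str, model: str) -> dict:
    """OpenAI-style chat payload; model must match what LM Studio has loaded."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }

def chat(prompt: str, model: str = "qwen3-8b",
         base_url: str = "http://localhost:1234/v1") -> str:
    """Send one chat message and return the assistant's reply text."""
    req = urllib.request.Request(
        base_url + "/chat/completions",
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        reply = json.load(resp)
    return reply["choices"][0]["message"]["content"]

if __name__ == "__main__":
    try:
        print(chat("Write a one-line Python hello world."))
    except OSError as err:
        print("Request failed; is the server running with a model loaded?", err)
```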

Common issues

Context length errors

Some models have shorter context limits than Bodega One's default context window. If you see a context length error, open Settings → Agent and reduce the maximum context tokens to match your model's limit. For reference, Qwen3-8B supports 32k tokens natively, while Qwen2.5-Coder-32B supports up to 128k.
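If you want to see why long chats hit this limit, here is a rough sketch of the kind of history trimming a client can do. The ~4-characters-per-token estimate is a heuristic, not a real tokenizer, and this is not Bodega One's actual implementation — just an illustration of the idea:

```python
def approx_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token (heuristic only)."""
    return max(1, len(text) // 4)

def trim_history(messages: list[dict], max_tokens: int) -> list[dict]:
    """Drop the oldest non-system messages until the estimated total fits."""
    kept = list(messages)
    while len(kept) > 1 and sum(approx_tokens(m["content"]) for m in kept) > max_tokens:
        # Preserve a system prompt at index 0 if present; drop the next-oldest turn.
        drop_at = 1 if kept[0].get("role") == "system" else 0
        if drop_at >= len(kept) - 1:
            break  # only the latest message remains; nothing left to drop
        del kept[drop_at]
    return kept
```

A budget slightly below the model's advertised limit leaves headroom for the reply, which is why lowering the setting in Bodega One fixes the error.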

Slow responses

If responses are slow, the model may be running on CPU instead of GPU. In LM Studio, check the “Hardware” panel. You want to see GPU layers loaded, not zero. If GPU layers show 0, your VRAM may be fully allocated to another process, or the model may be too large for your GPU.

Model not appearing in Bodega One

Bodega One fetches the model list from the LM Studio API. If you loaded a new model after connecting, refresh the model list in Settings → Providers → LM Studio.

What else you can do

Once LM Studio is connected, you have a fully local AI coding environment. The air-gap mode in Bodega One will work with no changes. Since LM Studio is local, enabling air-gap mode simply blocks any remaining outbound paths while keeping your LM Studio connection intact.

LM Studio is one of 15+ supported LLM providers in Bodega One. If you want to switch to Ollama, vLLM, or a cloud provider for comparison, the process is the same: open Settings → Providers and pick a different preset.

Ready to own your tools?

Beta opens May 2026. Complete 14 days and earn a $30 promo code.