Context Window Planner

How much code fits in one prompt?

Pick a model, describe your project, and see the math. No sign-up. Runs in your browser.

Calculator

1. Pick a model

2. Describe your codebase

3. Pick a language

Results

[Budget bar: 0–32.8K tokens, split into Code, Conversation, and System + Response]

Your codebase: 96.0K tokens
Context window: 32.8K tokens
Available for code: 23.8K tokens
Coverage: 24.8%

120 files × 80 lines × 10 tok/line = 96.0K tokens

That is roughly 29 files' worth of code per prompt (at 80 lines/file).

Semantic search

Use search to find relevant code snippets. Don't try to stuff everything in. Let the tool pull what it needs.

Same codebase, different models

How coverage changes across context window sizes with your 96.0K-token codebase.

Model               Context  Available  Coverage  Strategy
Phi-4-mini          16K      11.5K      12%       Semantic search
Qwen2.5-Coder-32B   32K      23.8K      24.8%     Semantic search
Qwen3-14B           128K     97.5K      100%      Whole-repo context
Llama 3.3 70B       128K     97.5K      100%      Whole-repo context
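
The rows above come from applying the same budget split to each model's context window. A minimal sketch, assuming the overhead shares described in "How the math works" below (15% conversation, 10% response, ~800 system-prompt tokens); model names and window sizes are taken from the table:

```python
CODEBASE = 96_000  # token estimate for the example codebase
MODELS = {
    "Phi-4-mini": 16_384,
    "Qwen2.5-Coder-32B": 32_768,
    "Qwen3-14B": 131_072,
    "Llama 3.3 70B": 131_072,
}

for name, window in MODELS.items():
    # 15% conversation + 10% response + ~800 tokens of system prompt
    available = window - int(window * 0.25) - 800
    coverage = min(available / CODEBASE, 1.0)  # cap at 100%
    print(f"{name}: {available / 1000:.1f}K available, {coverage:.1%} coverage")
```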

How the math works

Token estimate

Lines of code multiplied by a tokens-per-line ratio for your language. Python averages ~8 tokens/line, Java ~11. These are empirical averages across Llama, Qwen, and Mistral tokenizers.
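
As a sketch, the estimate is a single multiplication. The per-language ratios below are the empirical averages quoted above; treat any language not listed as an assumption defaulting to 10 tokens/line:

```python
# Empirical averages across Llama, Qwen, and Mistral tokenizers (see text).
TOKENS_PER_LINE = {"python": 8, "java": 11}

def estimate_tokens(files: int, lines_per_file: int, language: str = "") -> int:
    """Rough token count: files × lines per file × tokens per line."""
    ratio = TOKENS_PER_LINE.get(language, 10)  # assumed default ratio
    return files * lines_per_file * ratio

print(estimate_tokens(120, 80))  # 120 × 80 × 10 = 96,000 tokens
```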

Budget split

Not all context goes to code. 15% is reserved for conversation history, 10% for the model's response, and ~800 tokens for the system prompt. The rest is what you can fill with source code.
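
The split can be sketched as a small function; the default shares and system-prompt size are the figures stated above:

```python
def available_for_code(context_window: int,
                       conversation_share: float = 0.15,
                       response_share: float = 0.10,
                       system_prompt_tokens: int = 800) -> int:
    """Tokens left for source code after reserving overhead."""
    reserved = int(context_window * (conversation_share + response_share))
    return context_window - reserved - system_prompt_tokens

print(available_for_code(32_768))  # the 23.8K figure shown in the results
```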

Strategy

Over 80% coverage? Paste the whole repo. Between 30-80%? Point at specific files. Under 30%? Use search or a retrieval-augmented workflow. The tool tells you which.
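
The decision rule is a pair of thresholds. A minimal sketch, with illustrative strategy labels (the "Targeted files" name is an assumption for the middle band):

```python
def pick_strategy(coverage: float) -> str:
    """Map coverage (0.0–1.0+) to a context strategy per the thresholds above."""
    if coverage > 0.80:
        return "Whole-repo context"   # paste the whole repo
    if coverage >= 0.30:
        return "Targeted files"       # point at specific files
    return "Semantic search"          # retrieval-augmented workflow

print(pick_strategy(23_776 / 96_000))  # ~24.8% coverage → "Semantic search"
```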

Let Bodega One manage the context for you.

Bodega One reads your project, builds a semantic index, and pulls in the right files automatically. No manual context stuffing. Runs on your machine. One-time purchase.

Join the Waitlist

Token estimates are approximate. Actual tokenization varies by model and content. Estimates use average tokens-per-line ratios by language. Last updated March 2026.