Continue
Open-source AI coding assistant that lets you connect any LLM to your IDE
What is Continue?
Continue is an open-source AI coding assistant that takes a fundamentally different approach from most competitors: rather than locking you into a proprietary model, it acts as a universal connector between your IDE and any AI model you choose. You can wire Continue to OpenAI, Anthropic, Google Gemini, Mistral, or run a local model via Ollama — all through the same interface inside VS Code or JetBrains. This flexibility makes it uniquely appealing to privacy-conscious teams and developers who want to control their AI spend.
Core features include inline autocomplete, a codebase-aware sidebar chat, and slash commands for common tasks such as generating tests, writing docstrings, or explaining highlighted code. Because Continue is open source (Apache 2.0 license), teams can fork it, customize its behavior, and self-host the configuration server without depending on any external service. Configuration lives in a simple JSON file, which makes it straightforward to switch models, adjust context window sizes, and add custom prompts.
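As an illustration, a minimal configuration might look like the sketch below. Field names follow a recent `~/.continue/config.json` schema and may differ across versions; the API key placeholders and the `test` command are examples, not defaults.

```json
{
  "models": [
    {
      "title": "Claude",
      "provider": "anthropic",
      "model": "claude-3-5-sonnet-latest",
      "apiKey": "<YOUR_ANTHROPIC_KEY>"
    },
    {
      "title": "GPT-4o",
      "provider": "openai",
      "model": "gpt-4o",
      "apiKey": "<YOUR_OPENAI_KEY>"
    }
  ],
  "customCommands": [
    {
      "name": "test",
      "prompt": "Write unit tests for the selected code.",
      "description": "Generate unit tests"
    }
  ]
}
```

Switching providers is a matter of editing the `models` array; the sidebar lets you pick among the configured entries.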
Continue has gained a strong following in the open-source developer community and among organizations that want to run AI coding assistance on-premises using local models. The JetBrains plugin is particularly valued because it gives IntelliJ and PyCharm users an agentic experience comparable to what VS Code users get with Cursor. Continue is free to use — you pay only for the API calls to whichever model provider you connect to it.
Key Features
- Connect to any LLM: OpenAI, Anthropic, Google, Mistral, Ollama, and more
- Inline autocomplete with configurable model selection
- Codebase-aware sidebar chat with semantic indexing
- Slash commands for test generation, docstrings, and explanations
- Open-source Apache 2.0 license
- Local model support via Ollama for fully offline use
- VS Code and JetBrains plugin support
- Custom prompt and context configuration via JSON
- Multi-file context with automatic codebase indexing
- Team configuration sharing and self-hosted config server
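For fully offline use, the same configuration file can point chat, autocomplete, and the embedding model used for codebase indexing at a local Ollama server. The sketch below is illustrative; the model names are examples of code-capable and embedding models that Ollama can serve, and exact field names may vary by Continue version.

```json
{
  "models": [
    {
      "title": "Local Llama",
      "provider": "ollama",
      "model": "llama3.1:8b"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Local autocomplete",
    "provider": "ollama",
    "model": "qwen2.5-coder:1.5b"
  },
  "embeddingsProvider": {
    "provider": "ollama",
    "model": "nomic-embed-text"
  }
}
```

With this setup, no code or prompts leave the machine, which is the draw for the privacy-conscious teams mentioned above.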
Pros & Cons
Pros
- Complete model flexibility — use any LLM including local models
- Free and open-source with no usage fees beyond model API costs
- Strong JetBrains support gives IntelliJ users an agentic experience
- Highly customizable through simple JSON configuration
Cons
- Requires more setup than plug-and-play alternatives like Copilot
- Quality of suggestions depends entirely on the model you configure
- No managed cloud backend — you handle API keys and billing yourself
- Smaller team than VC-backed competitors means slower feature velocity
Pricing
Model: Free / Open Source
| Plan | Price | Key Limits |
|---|---|---|
| Open Source | $0/mo | Free to use, pay only for model API usage |
| Team Hub | Contact | Centralized team configuration, analytics, support SLA |