
Continue.dev Prompting Guide (2026)

How to prompt Continue.dev — the open-source AI coding extension. Custom commands, context providers, model routing, and config-driven workflows.

SurePrompts Team
April 20, 2026
11 min read

TL;DR

Continue.dev's strength is customization — out of the box it's a plain assistant, but with custom commands, context providers, and model routing it becomes a personal agent harness. Invest in config first; prompts second.

Prompting Continue.dev well starts before you type a prompt. Continue is an open-source AI coding extension for VS Code and JetBrains, and its defining feature is configurability — a config file that defines custom slash commands, context providers, and per-task model routing. Out of the box it is a generic inline-edit assistant. Tuned, it becomes a personal agent harness shaped to your codebase. The interesting work is deciding what goes into the config, because once you do, the prompts stay short.

What Continue.dev Is

Continue installs into VS Code or JetBrains and adds an AI side panel, inline edits, and slash commands. Three traits set it apart:

  • It is open source. The extension, the defaults, and the config schema are on GitHub. You can read how it works, fork it, and ship your own build.
  • It is config-driven. A single file (user profile or workspace) describes models, commands, context providers, and routing. Everything important is exposed.
  • It is BYO-model. Continue does not lock you to one provider. You wire in Claude, GPT, Gemini, local models via Ollama or llamafile, and more — often several at once, routed by task.

For the broader category, see The Complete Guide to Prompting AI Coding Agents and the tool use glossary entry. Continue sits in the same family as Cursor and Windsurf — an IDE-integrated AI assistant — but trades their curated UX for an open, configurable core.

The Config File Is the Prompt

Most coding assistants hide their configuration in a settings UI. Continue puts it in a config file (YAML in current versions, JSON in older ones — check the docs for your install). It holds:

  • Models. Providers, endpoints, and API keys, with aliases you can reference elsewhere.
  • Slash commands. Custom commands like /review or /test mapped to prompt templates and, optionally, specific models.
  • Context providers. Sources of extra context Continue can pull on demand — open files, git diff, terminal output, documentation URLs, and more.
  • Routing rules. Which model handles autocomplete, which handles chat, which handles edits.

The implication: you are not just writing prompts, you are designing a workflow. A tuned config makes the prompts trivially short because the heavy lifting happens before the prompt is sent. A default config puts all the work on the prompt.

Custom Slash Commands

A custom slash command is a named prompt template with optional model routing. You trigger it with /name, optionally followed by arguments, and Continue runs the underlying prompt with whatever context the command declares.

Good candidates are things you ask for repeatedly:

  • /review — a code review pass on the selection, with your team's checklist baked in.
  • /test — generate tests for the current function using your project's testing library.
  • /docs — write or update the doc comment for the current symbol in your house style.
  • /explain — explain the selected code in your team's vocabulary.
  • /commit — draft a commit message for the staged diff in your conventional-commit format.

The payoff is two-sided. The prompt becomes short — you type /test instead of re-describing conventions. And the prompt becomes consistent — everyone who syncs the config runs the same prompt, so the output has the same shape. If you type the same opening paragraph for three prompts in a row, it belongs in a custom command.
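As a sketch, a command like /docs from the list above might be declared as follows. This is illustrative only — field names evolve between Continue versions, and the prompt text is a placeholder for your own house style:

```yaml
# Illustrative only — field names vary between Continue versions;
# check the docs for your install.
slashCommands:
  - name: docs
    description: "Write or update the doc comment for the current symbol"
    prompt: |
      Write or update the doc comment for the selected symbol.
      Use our house style: one-sentence summary, parameters, return
      value, and one usage example. Do not change the code itself.
```

Invoke it as /docs with the symbol selected. The template, not the typed prompt, carries the conventions — which is exactly the consistency payoff described above.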

Context Providers

A context provider is a named source of context Continue splices into a prompt. Built-ins typically include the current selection, open files, the git diff, recent terminal output, and documentation URLs. The config controls which providers are enabled; some versions also let you register your own.

Three prompting patterns unlock once you know the providers:

  • Explicit context beats implicit context. Reference the diff provider instead of pasting the diff — the prompt stays short and the context stays live.
  • Scoped context beats whole-file context. Pulling only the selection or the changed lines keeps the context lean and the model focused.
  • Documentation as context. Pointing a provider at a library's docs site gives the model a grounded source instead of its training snapshot. Useful for fast-moving libraries.

If a prompt keeps under-performing because "the model does not know X," the fix is usually a context provider, not a longer prompt. Same principle as the .cursorrules pattern in the Cursor prompting guide — stable context in config, prompt about the change.
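In current builds, enabled providers surface in the chat panel as @-mentions (the exact provider names vary by version — check your install). A hypothetical exchange, referencing the diff provider instead of pasting the diff:

```text
> @diff Summarize this change in two sentences, then flag anything
> that touches error handling.
```

The prompt stays one line, and the attached diff is always the live one — re-running the same prompt after more edits picks up the new diff automatically.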

Model Routing — Different Tasks, Different Models

Continue lets you assign different models to different roles. A common split is:

  • Autocomplete — line-by-line suggestions as you type. Typical fit: a small, fast model — latency matters more than cleverness.
  • Chat — long-form explanation and back-and-forth. Typical fit: a capable hosted model — quality matters more than speed.
  • Edit — inline edits to code under the cursor. Typical fit: a capable hosted model tuned for code edits.
  • Apply — turns a suggestion into an applied diff. Typical fit: a smaller model is often enough — this is mostly formatting.
  • Embeddings — codebase indexing for retrieval. Typical fit: a dedicated embedding model, often local.

Role names change between versions; check the docs for your install. The point is not which roles exist but that you route by task. Paying frontier prices for every autocomplete tick is waste; paying local-model prices for every edit leaves quality on the table. Route so each task goes to a model whose cost/quality ratio fits.
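One way recent Continue versions express this split is by tagging each model with the roles it serves. The sketch below is illustrative — the model names are placeholders, and role names vary between versions:

```yaml
# Illustrative role split — model names are placeholders and role
# names vary between versions; check the docs for your install.
models:
  - name: tiny-local
    provider: ollama
    roles: [autocomplete]   # latency-sensitive: small and fast wins
  - name: frontier
    provider: anthropic
    roles: [chat, edit]     # quality-sensitive: capable hosted model
  - name: cheap-hosted
    provider: openai
    roles: [apply]          # mostly formatting: a smaller model is enough
  - name: embedder
    provider: ollama
    roles: [embed]          # codebase indexing for retrieval, kept local
```

The shape makes the cost/quality trade-off explicit: each role line is a deliberate decision about which model's price and latency fit that task.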

A Plausible Config Snippet

This is illustrative — not a canonical or drop-in config — and shows the shape of a tuned setup. Consult the Continue docs for the schema version your install uses.

```yaml
# Illustrative only — shows the shape of a tuned config.
# Field names evolve between versions; check the docs for your install.

models:
  - name: frontier-chat
    provider: anthropic
    alias: chat-primary
  - name: fast-autocomplete
    provider: openai
    alias: autocomplete
  - name: local-fallback
    provider: ollama
    alias: offline

routing:
  chat: chat-primary
  autocomplete: autocomplete
  edit: chat-primary

context:
  - name: currentFile
  - name: openFiles
  - name: gitDiff
  - name: docs
    urls:
      - "https://docs.example-framework.com"

slashCommands:
  - name: review
    prompt: |
      Review the selected code as a senior engineer on this team.
      Use the project review checklist (readability, error handling,
      test coverage, security). Be specific; reference lines.
    model: chat-primary

  - name: test
    prompt: |
      Write tests for the selected function using our testing
      library (Vitest). One happy path, one edge case, one error
      case. Match the style of tests in adjacent files.
    model: chat-primary

  - name: commit
    prompt: |
      Draft a conventional-commit message for the staged diff.
      Type is feat/fix/refactor/docs/test/chore. Body wrapped at
      72 chars. No attribution lines.
    context: [gitDiff]
```
Notice how little the prompts need to say. /review does not repeat the checklist every call — the checklist lives in the config. /commit does not need the diff pasted in — the gitDiff context provider attaches it. That is the point: move the stable parts of your prompts into config, and your day-to-day prompts get short.

A Tuned Prompt Example

This is hypothetical — not a real session. It shows how a pair of prompts looks once the config does the work.

```text
[In the Continue chat panel, with the failing test file open.]

> /test selected

[Continue runs the `test` command from the config. It takes the
selected function, pulls the open-file context, routes to the
frontier chat model, and returns tests in the project's style.]

[Reviews the output, applies the diff, runs the tests. One fails.]

> The third test expects an empty-string input to throw, but the
> current implementation returns null. Update the test to match
> current behavior. Leave the other two tests alone.

[Inline edit on the test file, scoped to the third test only.]
```

Two prompts. Both short. The first leans on a custom command and a context provider. The second is an atomic, scoped instruction. Neither re-describes the project's testing style, because that lives in the config. Same spirit as the atomic-prompt discipline in the Aider prompting guide — make each prompt do one thing.

When Continue Wins

Continue is strongest in specific situations:

  • You want control. The config is open. You can see and change anything. No black-box routing.
  • You want privacy. Local models via Ollama or similar keep code on your machine. Continue supports routing some or all roles to local models, so sensitive code can stay off hosted APIs.
  • You have unusual conventions. Custom commands encode house style, team checklists, and domain-specific patterns better than any generic assistant.
  • You mix models. If you already pay for Claude, GPT, and a local model, Continue lets you use all three from one extension without juggling tools.
  • You use both VS Code and JetBrains. Continue runs in both, with a shared config shape.

Where Continue is weaker: a first-day user opens the extension and sees a panel that looks like a plain chat. The magic is behind the config, and until you invest, Continue feels ordinary. Tools like Cursor and Windsurf front-load polish at the cost of openness — fine if polish is what you want, frustrating if control is.

Common Anti-Patterns

  • Skipping the config. Using Continue with defaults and wondering why it does not feel special. Fix: spend an hour writing three custom commands that match your actual workflow. The difference is dramatic.
  • Over-engineering the config. Writing fifteen custom commands on day one, most of which you never use. Fix: start with two or three. Add a new command when you notice you have typed the same opening paragraph three times.
  • Ignoring model routing. Using one frontier model for everything — expensive — or one cheap model for everything — frustrating. Fix: route autocomplete to fast, chat/edit to capable, apply to cheap. Revisit monthly as models change.
  • Stuffing prompts into the chat when they belong in a command. If the same instructions appear verbatim in three prompts, extract them. Fix: move stable instructions into a slash command; keep the prompt for the variable part.
  • Re-pasting context that a provider could supply. Copy-pasting the diff or the file every prompt. Fix: reference the provider. The prompt stays short and the context stays live.
  • Treating the config as frozen. Writing it once and never touching it again. Fix: treat it like a dotfile. Check it in, evolve it, share across machines.

FAQ

How is prompting Continue.dev different from prompting Cursor or Windsurf?

Cursor and Windsurf front-load polish; Continue front-loads openness. With Cursor you lean on @file mentions and a .cursorrules file; with Windsurf you lean on Cascade's flow awareness; with Continue you lean on custom commands, context providers, and model routing that you define. Tuned Continue prompts are often shorter because more of the work lives in the config.

Do I need to use local models?

No. Continue works with hosted models — Claude, GPT, Gemini, and others. Local model support is a feature, not a requirement. Teams with privacy constraints route some or all roles to local; teams without usually stick with hosted. Mixing is common: local for autocomplete, hosted for chat and edits.

Is Continue.dev really free?

The extension is open source. What costs money is the models it calls — route to Claude or GPT and you pay those providers directly, not Continue. Route to local models and there is no per-call cost beyond your hardware. That pay-the-model-not-the-tool pricing is part of why Continue appeals to cost-conscious teams.

How much config is too much?

If you cannot remember what a command does without looking, it is not earning its keep. If you have commands you never use, delete them. A good config is small, memorable, and tuned to your actual workflow — not a kitchen sink. Start with the two or three things you do daily and let the config grow from observed use.

Can I share a config across a team?

Yes — the config is a file, so it checks into a dotfiles or team repo like any other. Many teams keep a shared base with team-standard commands and let individuals layer personal additions on top. That gives a consistent floor — everyone has /review with the team checklist — without forcing a ceiling.
