
VSCodium + Ollama: Private AI Coding in Your IDE

Set up Continue and local models for a fast, offline workflow

Build a private, local AI coding setup with VSCodium, Continue, and Ollama. Learn why local models win on privacy, cost, and control, and how to set it up in minutes.


Time to read: 6 min


VSCodium + Ollama: A Private AI Coding Stack

We all want the modern AI-agent experience: faster refactors, quicker debugging, fewer context switches, and a bit more time for actual thinking. The surprise is that you can get most of that without sending your code to a cloud service. This guide shows how to build a local, offline workflow with VSCodium, Continue, and Ollama, plus the real-world tradeoffs.

If you want a solid AI assistant inside your IDE, you do not need Copilot or a subscription. You can get a fast, private, local workflow with:

  • VSCodium (telemetry-free VS Code builds)
  • Continue (in-editor AI assistant)
  • Ollama (local model runtime)

This trio gives you AI help without sending your code to external servers.

Advantages and disadvantages

Advantages

  • Privacy by default and offline work.
  • Predictable costs and no vendor lock-in.
  • Full control over model choice, prompts, and context.

Disadvantages

  • Hardware limits how large and fast your models can be.
  • Quality varies across models; you may need to try a few.
  • You are responsible for updates: new releases like Qwen 3 Coder can appear quickly, and you decide when to upgrade.
  • You still need clear prompts and scoped tasks to get good results.

Why VSCodium instead of VS Code

VS Code's source is MIT licensed, but Microsoft's distributed binaries add a separate license and enable telemetry by default. VSCodium ships ready-to-use builds with telemetry disabled, so you start with a privacy-first editor and no extra setup.

Why local AI beats cloud for many teams

Local models do not always beat the best cloud models on raw intelligence. But they win on what matters day-to-day:

  • Privacy by default: your code stays on your machine.
  • Lower cost: no subscriptions or per-seat fees.
  • Offline work: keep shipping even without internet access.
  • Control: pick the model, tune prompts, and switch anytime.

The local stack: VSCodium + Continue + Ollama

Continue lives inside the editor and handles chat, code edits, refactors, and explanations. Ollama runs the model locally and exposes it to Continue. Together they give you a full local AI workflow with zero cloud dependencies.
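
Under the hood, the link between the two is a local HTTP API: Ollama listens on port 11434 by default, and Continue sends its requests to that same endpoint, so nothing leaves your machine. Once the pieces below are installed, you can inspect that endpoint directly (a quick check, assuming the default port):

# List the models Ollama has pulled locally; these are the models Continue can offer.
curl http://localhost:11434/api/tags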

Ollama installation (more detail)

macOS

  1. Install with Homebrew: brew install ollama
  2. Start the service: ollama serve (or keep it running in the background with brew services start ollama)
  3. Verify the install: ollama -v

Linux

  1. Install with the official script: curl -fsSL https://ollama.com/install.sh | sh
  2. The script sets up and starts a systemd service on most distros; if it is not running, start it with: ollama serve
  3. Verify the install: ollama -v

Windows

  1. Download the installer from https://ollama.com and follow the prompts.
  2. Open a new terminal and verify the install: ollama -v
  3. The installer starts Ollama in the background automatically; if it is not running, start it with: ollama serve
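
Whichever platform you installed on, a quick sanity check confirms the runtime and its local API are up before you connect Continue (assuming the default port 11434):

# Check the CLI is on your PATH and print the version.
ollama -v

# Confirm the background service is reachable on the default port.
curl http://localhost:11434/
# Expected response: "Ollama is running"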

First model pull

  • Run ollama pull llama3.1:8b (or another model listed below).
  • Use ollama list to see what is installed.
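
Before wiring up the editor, it is worth confirming the model actually loads and generates. A quick one-off prompt from the terminal (substitute whatever model you pulled):

# Confirm the model is in the local registry.
ollama list

# Run a one-off prompt; the first call is slower while the model loads into memory.
ollama run llama3.1:8b "Write a one-line hello world in Python."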

10-minute setup checklist

  1. Install VSCodium.
  2. Install Ollama.
  3. Pull a model (start small if your laptop is modest).
  4. Install the Continue extension.
  5. Point Continue to your local Ollama endpoint.
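
Steps 4 and 5 can also be done from the command line. A minimal sketch, assuming VSCodium's codium CLI is on your PATH, that the extension id on Open VSX is Continue.continue, and that your Continue version still reads the older JSON config at ~/.continue/config.json (newer releases use ~/.continue/config.yaml with the same ideas):

# Install the Continue extension from Open VSX using VSCodium's CLI.
codium --install-extension Continue.continue

# Minimal Continue config pointing at the local Ollama endpoint.
# Warning: this overwrites any existing config; merge by hand if you already have one.
mkdir -p ~/.continue
cat > ~/.continue/config.json <<'EOF'
{
  "models": [
    {
      "title": "Llama 3.1 8B (local)",
      "provider": "ollama",
      "model": "llama3.1:8b",
      "apiBase": "http://localhost:11434"
    }
  ]
}
EOF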

Model recommendations (up to 8B)

For most laptops, 8B models are the sweet spot: strong enough for coding help and small enough to run on typical hardware. On a Mac with at least 16GB RAM, 8B models run well even without a dedicated GPU.

Good starting points:

  • llama3.1:8b for general coding and refactors.
  • qwen2.5-coder:7b or qwen2.5:7b for compact, fast edits.
  • deepseek-coder:6.7b for code completion and small fixes.

If your machine is very weak, try 3B to 4B models first. If you need deeper reasoning or larger context, consider 14B or 32B models, but expect slower performance and higher memory usage.
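
To experiment across sizes, pull one model from each tier and switch between them in Continue's model picker. The tags below are examples from the Ollama library at the time of writing; availability and naming can change:

# Small and fast: good for weak machines and quick completions.
ollama pull llama3.2:3b

# The 7B-8B sweet spot for most laptops.
ollama pull qwen2.5-coder:7b

# Larger and slower, with deeper reasoning; expect much higher memory use.
ollama pull qwen2.5-coder:14b

# See what is installed and how much disk space each model takes.
ollama list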

Recommended workflow for speed and quality

Here is a simple workflow that keeps things fast and surprisingly effective:

  • Use a smaller model for quick completions and short edits.
  • Use a larger model only when you need deeper reasoning.
  • Limit context to open files and target folders, not your whole disk.
  • Add a short rules prompt to keep changes clean and predictable.
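
One way to implement the small/large split is to register both models in Continue: a larger one for chat and refactors, and a small one dedicated to tab autocompletion. A minimal excerpt to merge into the config from the setup checklist, assuming your Continue version supports the tabAutocompleteModel field (model tags are examples):

# Excerpt for ~/.continue/config.json (merge into the existing file):
# a 7B model for chat and refactors, a 3B model dedicated to autocompletion.
{
  "models": [
    { "title": "Qwen2.5 Coder 7B", "provider": "ollama", "model": "qwen2.5-coder:7b" }
  ],
  "tabAutocompleteModel": {
    "title": "Llama 3.2 3B (autocomplete)",
    "provider": "ollama",
    "model": "llama3.2:3b"
  }
}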

Suggested rules prompt:

follow existing project style
do not invent APIs
prefer minimal diffs
ask when unsure

Use AGENTS.md for consistent results

Local agents work best when they have a short, consistent rule set. Many teams keep an AGENTS.md file in the repo root so the assistant sees the same guidance every time. Example template:

# AGENTS.md

## Goals
- Keep changes minimal and focused.
- Preserve existing code style and structure.
- Ask before making broad refactors.

## Behavior
- Do not invent APIs or dependencies.
- Prefer small, testable diffs.
- Call out assumptions and missing context.

## Context
- Use only files referenced in the prompt unless told otherwise.
- For large changes, propose a plan before editing.

This helps your local assistant behave consistently across tasks and teammates.

Limitations to expect

If you want the honest version, here it is:

  • Hardware matters: RAM and VRAM limit which models you can run.
  • Expect tradeoffs between speed, quality, and context length.
  • Local models still need clear prompts and scoped tasks.

Alternatives if you prefer other editors

  • Neovim with local LLM tooling
  • JetBrains IDEs with Continue
  • Other local assistants that support Ollama

Bottom line

If you want control, privacy, and zero vendor lock-in, VSCodium + Continue + Ollama is a strong default. Add cloud models only when you truly need them.

Ready to go further?

This is how we build at Vasilkoff.com: privacy-first, open-source friendly, and practical. We use these tools, we contribute to open source, and we enjoy helping like-minded teams. If you need experienced IT professionals who have seen a few "dinosaurs" and still ship at the cutting edge of AI, reach out via our contact page.
