• Korn Shell / Bash,  ollama,  Uncategorized,  Windows Bash,  WSL

    Upgrading Ollama for Copilot Support: The Step You Probably Miss

    Ollama recently added a built-in Copilot integration — but only from a certain version onwards. If your installation is older, you’ll get a confusing error before you even get started. Here’s exactly what happened when I upgraded, including the one gotcha that tripped me up along the way.

    The Starting Point: An Unsupported Version

    I wanted to launch Ollama’s new Copilot feature with the kimi-k2.5:cloud model. The command looked straightforward: The error made it clear: my version of Ollama simply didn’t know what copilot was. A quick version check confirmed the problem: Version 0.17.7 — too old for Copilot. Time to upgrade.

    Running the Upgrade

    Ollama’s official upgrade method is…
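The teaser describes checking the installed version before upgrading. A minimal Bash sketch of that check (the `version_ge` helper and the `required` value are assumptions for illustration; the post does not state the minimum version, so check the Ollama release notes):

```bash
#!/usr/bin/env bash
# Sketch: decide whether the installed Ollama is new enough for a feature.
# Compares dot-separated version strings using sort -V.
version_ge() {
  # true (exit 0) when version $1 >= version $2
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

installed="0.17.7"   # e.g. parsed from the output of: ollama --version
required="0.18.0"    # hypothetical minimum for Copilot support; not from the post

if version_ge "$installed" "$required"; then
  echo "Ollama $installed is new enough"
else
  echo "Ollama $installed is too old; upgrade with the official install script:"
  echo '  curl -fsSL https://ollama.com/install.sh | sh'
fi
```

On Linux and WSL, re-running the official install script upgrades an existing installation in place.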

  • AI agents,  ollama

    Which Ollama Models Work with Hermes Agent? A Quick Context Window Check

    If you’ve ever tried to run Hermes Agent only to get a cryptic error about context windows, you’re not alone. Here’s a quick guide to understanding what’s happening — and how to find a compatible model in Ollama.

    The Error

    When you launch Hermes Agent with an incompatible model, you’ll see something like this:

    Model deepseek-coder:33b has a context window of 16,384 tokens, which is below the minimum 64,000 required by Hermes Agent. Choose a model with at least 64K context, or set model.context_length in config.yaml to override.

    Hermes Agent is designed to handle long, multi-step reasoning and tool-use chains. For that to work reliably, it needs a model with…
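The error above hinges on one number: the model's context window. `ollama show <model>` prints model details including a "context length" field, so a small Bash helper can compare it against the agent's minimum (the exact field name matches current `ollama show` output, but treat it as an assumption):

```bash
#!/usr/bin/env bash
# Sketch: check whether an Ollama model's context window meets a minimum.
MIN_CTX=64000   # minimum tokens required by Hermes Agent, per the error message

context_length() {
  # reads `ollama show` output on stdin, prints the context-length number
  awk '/context length/ { print $NF; exit }'
}

check_model() {
  local model="$1" ctx
  ctx=$(ollama show "$model" | context_length)
  if [ "${ctx:-0}" -ge "$MIN_CTX" ]; then
    echo "$model: OK ($ctx tokens)"
  else
    echo "$model: too small (${ctx:-unknown} < $MIN_CTX)"
  fi
}

# usage: check_model deepseek-coder:33b
```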

  • AI Automation,  n8n,  Nvidia CUDA,  ollama,  WSL

    Running Ollama with NVIDIA GPU inside WSL (Ubuntu) – Step-by-Step Guide

    Running large language models locally with GPU acceleration inside WSL2 is not only possible—it’s surprisingly efficient once properly configured. This guide walks through a working setup using Ubuntu, NVIDIA GPU passthrough, and Ollama.

    🧩 Target Setup

    1. Prepare Windows Host. Check Windows Version: ensure you’re on a supported version (recommended: enable WSL2). Install NVIDIA Driver (with WSL support) on your Windows machine: install a current NVIDIA driver that supports WSL CUDA, then verify. If this fails, stop here—GPU passthrough will not work.
    2. Prepare Ubuntu (WSL): start WSL, update packages.
    3. Verify GPU inside WSL.
    4. (Optional) Install CUDA Toolkit, then verify.
    5. Install Ollama. ⚠️ Required Dependency for Ollama: before…
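The verification steps above can be sketched as a short Bash snippet. The `wsl --update` / `wsl --status` and `nvidia-smi` commands are standard; the `is_wsl` helper (which checks for the "microsoft" marker that WSL kernels put in `/proc/version`) is an illustrative addition, not from the guide:

```bash
#!/usr/bin/env bash
# Sketch of the guide's verification steps for Ollama + NVIDIA GPU under WSL2.

is_wsl() {
  # WSL kernels advertise "microsoft" in /proc/version (case-insensitive)
  grep -qi microsoft "$1"
}

# On the Windows host (run in PowerShell, shown here as comments):
#   wsl --update     # make sure the WSL2 kernel is current
#   wsl --status     # confirm the default version is 2
#   nvidia-smi       # a driver with WSL CUDA support must report the GPU

# Inside Ubuntu (WSL):
if is_wsl /proc/version; then
  echo "running inside WSL"
  nvidia-smi || echo "GPU passthrough not working; fix the Windows driver first"
fi
```

If `nvidia-smi` fails inside WSL but works on the Windows host, the usual culprit is an outdated Windows-side NVIDIA driver rather than anything inside Ubuntu.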