Bug: GitHub Copilot is unable to fetch Ollama models #156568
Replies: 7 comments 1 reply
-
💬 Your Product Feedback Has Been Submitted 🎉 Thank you for taking the time to share your insights with us! Your feedback is invaluable as we build a better GitHub experience for all our users. Here's what you can expect moving forward ⏩
Where to look to see what's shipping 👀
What you can do in the meantime 💻
As a member of the GitHub community, your participation is essential. While we can't promise that every suggestion will be implemented, we want to emphasize that your feedback is instrumental in guiding our decisions and priorities. Thank you once again for your contribution to making GitHub even better! We're grateful for your ongoing support and collaboration in shaping the future of our platform. ⭐
-
It sounds like Copilot may be having trouble communicating with the local Ollama server. A few things to try: first confirm that the Ollama server is actually running and reachable (a quick check is sketched below). If it still fails, checking the Developer Tools console in VS Code (Help > Toggle Developer Tools) can help surface the underlying error. Hope this helps narrow it down!
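A minimal reachability check, assuming Ollama is on its default local endpoint (http://localhost:11434); /api/tags is the Ollama API route that lists the installed models:
# should print a JSON document listing the installed models
curl http://localhost:11434/api/tags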
-
What if I have my Ollama API running on a different computer (same network)? I can still access my Ollama API from any other LLM client; it fails only with VS Code. Where can I modify those settings?
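(One way to sanity-check the remote setup from the machine running VS Code, sketched here with a placeholder address: the ollama CLI honours the OLLAMA_HOST environment variable, so pointing it at the other computer confirms the API is reachable over the network. Whether Copilot itself picks up OLLAMA_HOST is not guaranteed, so treat this only as a connectivity test.)
# 192.168.1.50 is a placeholder for the machine that actually runs Ollama
OLLAMA_HOST=192.168.1.50:11434 ollama list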
-
I run Ollama on the same network using a beefier machine. It would be great if it were possible to configure the Ollama host address and port settings used by GitHub Copilot. Thanks!
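(For context, Ollama only listens on localhost by default, so a machine running VS Code elsewhere on the network cannot reach it until the server is told to bind to other interfaces. A sketch of the server-side change, assuming the server is started manually rather than by a service manager:)
# on the machine running Ollama: listen on all network interfaces instead of only localhost
OLLAMA_HOST=0.0.0.0 ollama serve
(If Ollama runs as a systemd or Homebrew service, the same OLLAMA_HOST variable has to be set in that service's environment instead. Whether the Copilot side can then be pointed at the remote address depends on the Copilot Chat version, so it is worth searching the VS Code settings for "ollama".)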
-
In my case, I am running Ollama locally.
Ollama is installed with Homebrew.
I start the service with:
brew services start ollama
ollama run deepseek-r1:70b
From the terminal, Ollama works perfectly.
From VS Code, however, it fails to fetch the models, even though Ollama is installed locally.
What can I do to troubleshoot this?
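(A couple of quick checks for this setup, as a sketch: the brew-managed service is what serves the HTTP API, on port 11434 by default, and `ollama run deepseek-r1:70b` only opens an interactive chat session, so it does not need to be running for Copilot to fetch the model list.)
# confirm the Homebrew-managed service is in the "started" state
brew services list
# confirm the models Copilot should be able to see are actually installed
ollama list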
-
I'm having the exact same issue. Failed to fetch models.
-
I was having the same problem on Linux. I was using the package that comes with Arch Linux, but after installing Ollama using the download from their official website, the problem was solved.
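(For anyone else hitting this with a distro package on Linux, the official install route is the script from ollama.com; review it before piping it to sh if you prefer:)
curl -fsSL https://ollama.com/install.sh | sh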
-
Select Topic Area
Bug
Body
I have freshly installed Ollama on my Mac.
GitHub Copilot (latest stable version as of today, with the latest stable version of VS Code) finds the local Ollama installation.
But then it fails to fetch the models.