System Instructions do not work for Ollama #752
What system instructions are you using? The system instructions are appended to the default prompts and then sent to the model. The model is made aware of the instructions set by the user, but it is prompted in such a way that those instructions cannot change the model's writing style too much (this is to ensure the model gives quality results irrespective of the system instructions set by the user).
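For illustration, a minimal sketch of that appending behavior (the function name and prompt layout here are assumptions, not the project's actual source):

```ts
// Illustrative sketch: shows the behavior described above, where the user's
// instructions are appended after the built-in prompt rather than replacing it.
const withSystemInstructions = (
  defaultPrompt: string,
  systemInstructions: string,
): string => {
  if (!systemInstructions.trim()) return defaultPrompt;
  // Appending (rather than prepending) keeps the built-in prompt dominant,
  // so user instructions cannot change the output style too much.
  return `${defaultPrompt}\n\nUser's system instructions:\n${systemInstructions}`;
};
```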
I applied very simple system instructions, as you described. This time I tested under different conditions to make sure it was a solid experiment.
The first result uses the Gemma 3 27B QAT (Q4) model.
WebSearchAgent result: (screenshot)
WritingAgent result: (screenshot)
The second result uses the mistralai_Mistral-Small-3.1-24B-Instruct-2503-Q6_K model.
WebSearchAgent result: (screenshot)
WritingAgent result: (screenshot)
Clearly, the WebSearchAgent is not applying the response format I entered in the system instructions. For a more thorough comparison, I also ran the same test in LM Studio (Gemma 3 27B QAT model).
WebSearchAgent result: (screenshot)
WritingAgent result: (screenshot)
With LM Studio, you can see that both agents respect the response format and answer in Korean. This means the system instructions were applied correctly.
I've just tested them myself and they work fine with Ollama too. What model are you using with Ollama? Note: just telling the LLM to get sources from Wikipedia will not change the sources to Wikipedia, because the engines are configured per focus mode and are constant.
@ItzCrazyKns To get you the test results right away: I deleted all the Docker data, cloned the repository fresh from GitHub, and brought it up in Docker again. I then set up only Ollama and LM Studio, and as the test results above show, Ollama does not follow the system instructions even with the same model, while LM Studio follows them well.
In the system instructions I deliberately required Wikipedia as a source, answers in Korean, and a fixed writing style, so that by asking the model to describe its response format I could check whether it had understood all of that. As mentioned above, I'm using quantized builds of Gemma 3 27B and Mistral Small 3.1 24B. For reference, I'm on a MacBook Pro (M1 Max), with Ollama 0.6.5 and Docker 4.39.0.
For now, I'm forcing it to follow the system instructions with the following two modifications (written with Cursor AI).
providers/ollama.ts:
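Since the actual diff isn't reproduced here, this is only a rough sketch of the kind of change, assuming LangChain's ChatOllama and a systemInstructions string from the config (the wrapper and names are illustrative, not the exact patch):

```ts
import { ChatOllama } from '@langchain/community/chat_models/ollama';
import { BaseMessage, SystemMessage } from '@langchain/core/messages';

// Illustrative sketch, not the exact patch: wrap the Ollama chat model so
// every call receives the user's system instructions as a real leading
// system message, instead of having them appended inside a human prompt.
export const loadOllamaChatModel = (
  baseUrl: string,
  model: string,
  systemInstructions: string,
) => {
  const chatModel = new ChatOllama({ baseUrl, model, temperature: 0.7 });
  return {
    invoke: (messages: BaseMessage[]) =>
      chatModel.invoke([new SystemMessage(systemInstructions), ...messages]),
  };
};
```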
metaSearchAgent:
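Again, only a sketch of the idea (the prompt variable names defaultWebSearchPrompt and query are assumptions): move {systemInstructions} to the front of the system prompt so the model sees the user's rules before the long built-in instructions, instead of after them.

```ts
import { ChatPromptTemplate } from '@langchain/core/prompts';

// Illustrative sketch: placing {systemInstructions} first so smaller local
// models see the user's rules before the long built-in prompt, not after it.
const answeringPrompt = ChatPromptTemplate.fromMessages([
  ['system', '{systemInstructions}\n\n{defaultWebSearchPrompt}'],
  ['human', '{query}'],
]);
```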
System instructions do not work for the Ollama WebSearchAgent.
In Ollama, the system instructions only worked with the WritingAgent.
I used the Gemma 3 27B QAT model in Ollama. The system instructions are read fine in LM Studio or Gemini, so it doesn't seem to be a problem with other providers.
I'm temporarily working around the issue with the Cursor patch above, but I don't know what issues it might cause, so I'd like to see it officially fixed.