
LLM ignores correct search results in latest version(s) (e.g. Gemma3:27B / QWQ:32B) #734


Open
badacoolye opened this issue Apr 7, 2025 · 4 comments

Comments

@badacoolye

Hi guys,

Looks like I’m getting strange results in the recent version(s). The search engine is returning correct and relevant results, but they are completely ignored by the LLM – tested with both Gemma3 27B and QWQ 32B. This was not the case in earlier versions.

Let me know if there’s anything I can check or provide to help debug this further.

Thanks!
B.

[screenshots]

@bluesonny

Yes, me too. Setting up the search engine in SearXNG itself gives reasonable results, but setting up the search engine in Settings has no effect, and the results that come back over that connection are not optimal.
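
To rule out the SearXNG side, its JSON API can be queried directly, independent of Perplexica. The sketch below is a TypeScript example under assumptions: the instance URL is a placeholder for your own, and `format=json` only works if the json format is enabled in SearXNG's settings.yml.

```ts
// Sketch: query a SearXNG instance directly to confirm it returns results,
// independent of Perplexica. Assumes SearXNG is reachable at SEARXNG_URL
// (placeholder) and has the "json" format enabled in its settings.yml.
const SEARXNG_URL = process.env.SEARXNG_URL ?? "http://localhost:8080";

async function searxngSearch(query: string) {
  const url = new URL("/search", SEARXNG_URL);
  url.searchParams.set("q", query);
  url.searchParams.set("format", "json");

  const res = await fetch(url);
  if (!res.ok) throw new Error(`SearXNG returned HTTP ${res.status}`);

  const data = await res.json();
  // Each result carries (at least) title, url and content fields.
  for (const r of data.results.slice(0, 5)) {
    console.log(`${r.title}\n  ${r.url}\n  ${r.content ?? ""}\n`);
  }
}

searxngSearch("what is the current date and time").catch(console.error);
```

If this prints sensible results, SearXNG is fine and the problem is more likely on the Perplexica/LLM side.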

@Cryztalzone

Cryztalzone commented Apr 13, 2025

Same problem here. I asked it a simple question that was answered correctly in the search results: "What is the current date and time?"

[screenshot]

Using Ollama, I tried debugging the requests: it doesn't receive one. The only requests to the Ollama API are for the search-term generation and the follow-up suggestions; the results from SearXNG never show up in any request. Interestingly, the summary of a webpage worked just fine: the prompt with the system instructions and website content was sent and processed as expected.

[screenshot]
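
One way to confirm what actually reaches Ollama is a small logging proxy between Perplexica and the Ollama API. This is only a debugging sketch; the ports and the idea of pointing Perplexica's Ollama URL at the proxy are assumptions for illustration, not project-provided tooling.

```ts
// Sketch: a tiny logging proxy in front of Ollama, to see exactly which
// prompts are sent. Point the app's Ollama URL at http://localhost:11435
// instead of http://localhost:11434 and watch stdout.
// Ports are assumptions; adjust for your setup.
import http from "node:http";

const OLLAMA = { host: "127.0.0.1", port: 11434 };
const LISTEN_PORT = 11435;

http
  .createServer((clientReq, clientRes) => {
    const chunks: Buffer[] = [];
    clientReq.on("data", (c) => chunks.push(c));
    clientReq.on("end", () => {
      const body = Buffer.concat(chunks);
      if (body.length > 0) {
        console.log(`\n=== ${clientReq.method} ${clientReq.url} ===`);
        console.log(body.toString("utf8").slice(0, 2000)); // truncate long prompts
      }

      // Forward the request unchanged to the real Ollama server.
      const proxyReq = http.request(
        {
          ...OLLAMA,
          path: clientReq.url,
          method: clientReq.method,
          headers: { ...clientReq.headers, host: `${OLLAMA.host}:${OLLAMA.port}` },
        },
        (proxyRes) => {
          clientRes.writeHead(proxyRes.statusCode ?? 502, proxyRes.headers);
          proxyRes.pipe(clientRes);
        },
      );
      proxyReq.on("error", (err) => {
        clientRes.writeHead(502);
        clientRes.end(String(err));
      });
      proxyReq.end(body);
    });
  })
  .listen(LISTEN_PORT, () =>
    console.log(`Logging proxy on :${LISTEN_PORT} -> ${OLLAMA.host}:${OLLAMA.port}`),
  );
```

With this running, every prompt sent to Ollama shows up on stdout, so it is easy to check whether the SearXNG results are ever included in a chat request.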

EDIT: @badacoolye Do you have custom system instructions? The issue only appears when I use a longer set of system instructions. Without them, or with just a few words, it works as expected.

EDIT2: It also happens when the prompt is in a language other than English, sometimes even in English, system prompt or not.

@Chris2000SP

I don't have exactly the same issue, but for some reason I get no answer from Perplexica whenever it needs to search the web. I do get an answer if I give it a link like you did, @Cryztalzone. But when a web search is required, it refuses to generate with Ollama, and my GPU shows no load even though the VRAM is full.

@Chris2000SP

Oh, sorry, I found out that my bigger models did not fit into VRAM, so Ollama could not generate an answer. If I use a small model, the VRAM doesn't fill up and it generates an answer. But llama3.2 with 3b outputs bad answers with no links; everything is missing, only plain text, nothing more. I'm downloading another model now. gemma3:12b should fit into my 16 GB of VRAM, but with Perplexica it did not fit.
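
To check how much of a loaded model actually ended up on the GPU, Ollama's /api/ps (list running models) endpoint can be queried. A minimal sketch, assuming the default Ollama URL; the size/size_vram fields follow Ollama's documented response, but it's worth dumping the raw JSON if they differ in your version.

```ts
// Sketch: ask Ollama which models are currently loaded and how much of each
// is resident in VRAM, to see whether a model actually fit on the GPU.
// The URL is an assumption; adjust as needed.
const OLLAMA_URL = process.env.OLLAMA_URL ?? "http://localhost:11434";

async function showLoadedModels() {
  const res = await fetch(`${OLLAMA_URL}/api/ps`);
  if (!res.ok) throw new Error(`Ollama returned HTTP ${res.status}`);

  const { models } = await res.json();
  for (const m of models ?? []) {
    const total = m.size;       // total bytes the loaded model occupies
    const inVram = m.size_vram; // bytes of that actually resident in VRAM
    const pct = total ? ((inVram / total) * 100).toFixed(1) : "0";
    console.log(`${m.name}: ${(total / 1e9).toFixed(1)} GB total, ${pct}% in VRAM`);
  }
}

showLoadedModels().catch(console.error);
```

If size_vram is well below size, part of the model has spilled into system RAM, which would explain the behaviour described above.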
