[Bug]: There is no ToolCallResult in the output object of deepseek_qwen2.5_32b stream_events #18742


Closed
RyanOvO opened this issue May 15, 2025 · 4 comments
Labels
bug Something isn't working triage Issue needs to be triaged/prioritized

Comments


RyanOvO commented May 15, 2025

Bug Description

Source code from the official tool-calling documentation:

[Screenshots of the example code]

The output content contains the information of tool invocation:

[Screenshot of the output]

My analysis is that the stream_events event objects emitted by llama_index do not match the tool-call format produced by deepseek_qwen2.5_32b, so no ToolCallResult event appears in the output.

How can this be solved?

Version

latest

Steps to Reproduce

None

Relevant Logs/Tracebacks

@RyanOvO RyanOvO added bug Something isn't working triage Issue needs to be triaged/prioritized labels May 15, 2025

dosubot bot commented May 15, 2025

To resolve the issue of the missing ToolCallResult in the output object of deepseek_qwen2.5_32b stream_events when using llama_index, ensure the following:

  1. Correct Implementation of call_tool Function: Verify that the call_tool function is correctly implemented and invoked. This function is responsible for generating the ToolCallResult and writing it to the event stream. Make sure the function is called with the appropriate ToolCall event and that the ToolCallResult is properly constructed and written to the event stream using ctx.write_event_to_stream(result_ev) [1].

  2. Tool Availability: Ensure that the tool name provided in the ToolCall event matches an available tool in the system. The function checks for tool availability and handles errors accordingly. If the tool is not found, it will result in an error message instead of a valid ToolCallResult [1].

  3. Setup of Tools: Make sure the tools are correctly set up using FunctionTool.from_defaults and that the return_direct attribute is set appropriately for tools that should return results directly. This setup is crucial for the ToolCallResult to be generated and streamed correctly [2].

  4. Handling of Events: The llama_index framework tracks tool calls and their arguments through events such as ResponseFunctionToolCall and ResponseFunctionCallArgumentsDoneEvent. Ensure these events are correctly handled and that the tool call arguments are completed and appended to the list of tool calls [3].

  5. JSON Data Handling: Ensure that the JSON data related to tool calls is complete and correctly formatted. If the JSON is incomplete, use the _repair_incomplete_json function to attempt repairs, and validate the repaired JSON. This step is important to ensure that the ToolCallResult is correctly parsed and included in the output object [4].

By following these steps, the ToolCallResult should be present in the output object.
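On the consuming side, the usual pattern is to filter the streamed events by type. The sketch below uses stand-in dataclasses to demonstrate the isinstance-based filtering; with the real library you would import `ToolCall` and `ToolCallResult` from `llama_index.core.agent.workflow` and iterate `handler.stream_events()` instead. Everything here is a self-contained illustration, not llama_index code:

```python
import asyncio
from dataclasses import dataclass, field

# Stand-in event types mirroring the shape of llama_index's
# ToolCall / ToolCallResult workflow events (illustration only).
@dataclass
class ToolCall:
    tool_name: str
    tool_kwargs: dict = field(default_factory=dict)

@dataclass
class ToolCallResult:
    tool_name: str
    tool_output: str

async def fake_stream_events():
    """Simulates handler.stream_events(): a tool call, then its result."""
    yield ToolCall(tool_name="add", tool_kwargs={"a": 2, "b": 3})
    yield ToolCallResult(tool_name="add", tool_output="5")

async def collect_tool_results(stream):
    """The filtering pattern: keep only ToolCallResult events."""
    results = []
    async for ev in stream:
        if isinstance(ev, ToolCallResult):
            results.append(ev)
    return results

results = asyncio.run(collect_tool_results(fake_stream_events()))
```

If the model never emits a tool call (as turned out to be the case in this issue), the stream simply contains no `ToolCallResult` events to filter, which matches the behavior the reporter observed.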


@logan-markewich
Collaborator

The model did not call any tools. Try adjusting the system prompt or tool names/descriptions

@RyanOvO
Author

RyanOvO commented May 16, 2025

The model did not call any tools. Try adjusting the system prompt or tool names/descriptions

@logan-markewich thanks, I will try.

However, I found that when tools like TavilyToolSpec and YahooFinanceToolSpec are not used, and I instead use self-written tool functions such as add and dangerous_task, the ToolCallResult is recognized.

[Screenshot]

I guess there are still some differences when a remote middleware tool is called by the LLM.
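One plausible reason self-written tools work better: FunctionTool.from_defaults derives the tool's name and description from the Python function's name and docstring, so locally defined tools are well-described by construction. A plain-Python sketch of such tools (the dangerous_task body is invented for illustration, and the FunctionTool usage in the comment assumes llama_index is installed):

```python
def add(a: int, b: int) -> int:
    """Add two integers and return the sum.

    The function name and this docstring become the tool's name and
    description when wrapped with FunctionTool.from_defaults(fn=add);
    that text is what the LLM sees when deciding whether to call it.
    """
    return a + b

def dangerous_task() -> str:
    """Perform a task that requires explicit user confirmation.

    (Body invented for illustration.)
    """
    return "Task completed after confirmation."

# With llama_index installed, these become tools via, e.g.:
#   from llama_index.core.tools import FunctionTool
#   tools = [FunctionTool.from_defaults(fn=add),
#            FunctionTool.from_defaults(fn=dangerous_task)]
```

Remote tool specs, by contrast, ship with fixed names and descriptions that a given model may or may not map onto its tool-call format well, which is consistent with the maintainer's point about prompts and tool descriptions.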

@logan-markewich
Collaborator

Yea this really comes down to the prompts/tool names+descriptions and the LLM being used
