feat: use non-streaming tool calls for streaming tool calls #48
base: main
Conversation
Reviewer's Guide: This pull request modifies the OpenAI compatibility layer to disable streaming when tool calls are requested. It updates the payload dictionary so that non-streaming mode is forced whenever tools are present in the request.
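A minimal sketch of the approach described above, assuming a helper that assembles the request payload (the function and key names here are illustrative assumptions, not the PR's actual code): when the client supplies tool definitions, the compatibility layer overrides the stream flag before forwarding the request, since llama.cpp cannot stream tool calls.

```python
def build_completion_payload(params: dict) -> dict:
    """Assemble the payload forwarded to the llama.cpp backend.

    Hypothetical sketch: llama.cpp does not support streaming tool
    calls, so force non-streaming whenever tools are requested.
    """
    payload = dict(params)  # copy so the caller's dict is untouched
    if payload.get("tools"):
        # Tool calls requested: disable streaming regardless of the
        # client's original "stream" setting.
        payload["stream"] = False
    return payload
```

Requests without tools keep whatever `stream` value the client sent; only tool-call requests are silently downgraded to non-streaming.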
Force-pushed from a4310f5 to 4d0f2ca
@bbrowning I tested this here - how can I generate the Markdown report from the generated JSON?
This could be made easier, but until now all providers were in-tree, so it made sense to have the report generation in-tree as well.
Force-pushed from 56dc2d7 to 9a95420
Hey @nathan-weinberg - I've reviewed your changes - here's some feedback:
- Consider adding a link to relevant llama.cpp documentation or an issue tracker regarding the lack of streaming tool call support for future reference.
Here's what I looked at during the review
- 🟢 General issues: all looks good
- 🟢 Security: all looks good
- 🟢 Testing: all looks good
- 🟢 Complexity: all looks good
- 🟢 Documentation: all looks good
Force-pushed from 9a95420 to 9e99639
Signed-off-by: Nathan Weinberg <[email protected]>
Force-pushed from 9e99639 to 1c44fe8
Fixes #47
Summary by Sourcery
Modify OpenAI compatibility layer to force non-streaming mode for tool calls due to llama.cpp limitations