Fix Unit 1 dummy_agent_library.ipynb ValueError #495
Conversation
Thanks @insdout. This change works as expected. However, after running the code a couple of times, I get the following issue:

HfHubHTTPError: 402 Client Error: Payment Required for url: https://router.huggingface.co/hf-inference/models/meta-llama/Llama-3.3-70B-Instruct (Request ID: Root=1-682887f2-3e7d738459d115754906ebf1;1959db62-e5df-4b7e-9531-0b6180242d07) You have exceeded your monthly included credits for Inference Providers. Subscribe to PRO to get 20x more monthly included credits.
@ashishsamant2311 I think the issue you're encountering now is related to HF's inference usage limits, not the model compatibility problem that this PR was addressing. The error indicates you've exceeded your monthly quota for Inference Providers.
Would you mind reviewing this PR when you have a moment? It addresses #487 by explicitly including the `provider` parameter. Let me know if any further changes are needed! Thanks a lot!
@insdout I had issues using a number of models, even when specifying other providers. I'm not an HF Pro subscriber, so I'm not able to test directly against the

It's not just Llama models - I had similar results with Qwen and Mistral models. Does this work for you using other providers?

PS: I'm using the notebook linked from this page - I assume that is the only one.
@iamcam I encountered the same issue when attempting to use Llama, Qwen and Mistral models with various providers - the error persisted across different configurations. The only solutions that worked for me were:

- Switching out
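The provider-switching workaround the commenters describe can be sketched as a small fallback loop. This is an illustrative helper, not code from the notebook: `first_working_provider` and `fake_call` are hypothetical names, and the provider list is an example; in practice `call_model` would wrap a real client call such as one made with `huggingface_hub.InferenceClient`.

```python
def first_working_provider(providers, call_model):
    """Try each provider in order; return the first (provider, result)
    pair that succeeds, or re-raise the last error if all fail."""
    last_error = None
    for provider in providers:
        try:
            return provider, call_model(provider)
        except Exception as exc:  # in practice e.g. HfHubHTTPError (402/404)
            last_error = exc
    if last_error is None:
        raise ValueError("no providers given")
    raise last_error


# Stand-in callable for illustration: only "hf-inference" succeeds here.
def fake_call(provider):
    if provider != "hf-inference":
        raise RuntimeError(f"402 Client Error for provider {provider}")
    return "ok"


provider, result = first_working_provider(
    ["together", "novita", "hf-inference"], fake_call
)
# provider == "hf-inference", result == "ok"
```

The same loop works for falling back across model names instead of providers, which matches the commenters' experience that only certain model/provider combinations succeed.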
Hi! Just a quick note: the notebooks in this repo are legacy and no longer actively maintained (we'll remove them to avoid confusion). The current source of truth lives in the Hugging Face organization here: https://huggingface.co/agents-course/notebooks. I believe this particular issue has already been fixed there (which is what users see in the course). However, if you find that the issue still exists, we'd be more than happy to review and accept a contribution. Thanks again! 🙏
Description
This pull request addresses huggingface/agents-course#487, which reports an issue in Unit 1 ("Dummy Agents Library" section) of the AI Agents course. The provided code attempts to use the `meta-llama/Llama-3.2-3B-Instruct` model for text generation, resulting in a `ValueError`. This indicates that the selected model is not compatible with the `text-generation` task as expected in the course materials.

Fix
The following line:
has been updated to:
This explicitly sets the provider to `hf-inference` and uses a compatible model for the intended task.

Outcome
The updated code resolves the model compatibility error and allows the example in Unit 1 to execute successfully.
Linked Issue
Fixes #487