Fix GitHub Action failure and unlock Gemini limit #7971

Open
wants to merge 4 commits into base: main

Conversation

@ous50 ous50 commented May 25, 2025

💻 Change Type

  • ✨ feat
  • 🐛 fix
  • ♻️ refactor
  • 💄 style
  • 👷 build
  • ⚡️ perf
  • 📝 docs
  • 🔨 chore

🔀 Description of Change

  • Fix failures in the GitHub Action auto-update workflow. Users are required to generate a PAT (Personal Access Token) with the Actions, Commit statuses, Contents, Pull requests, and Workflows permissions.
  • New feature: set the Gemini harm block threshold to 'OFF' to unlock more of Gemini's capabilities.

📝 Additional Information

vercel bot commented May 25, 2025

@ous50 is attempting to deploy a commit to the LobeHub Team on Vercel.

A member of the Team first needs to authorize it.

@dosubot dosubot bot added the size:S This PR changes 10-29 lines, ignoring generated files. label May 25, 2025
@lobehubbot
Member

👍 @ous50

Thank you for raising your pull request and contributing to our community.
Please make sure you have followed our contributing guidelines. We will review it as soon as possible.
If you encounter any problems, please feel free to connect with us.

@dosubot dosubot bot added Gemini 🐛 Bug Something isn't working | 缺陷 labels May 25, 2025
Contributor

@greptile-apps greptile-apps bot left a comment

PR Summary

This PR makes two significant changes to LobeChat that require careful review:

  • 🔒 Modifies Google AI's safety settings by setting HarmBlockThreshold.OFF for all Gemini models, effectively disabling content safety filters which could have security implications
  • 🔑 Updates GitHub Actions sync workflow to use Personal Access Token (PAT) with specific permissions (Actions, Commit statuses, Contents, Pull requests, Workflows) instead of default GITHUB_TOKEN
  • ⚠️ The removal of safety thresholds for Gemini models requires thorough security assessment before merging
  • 🔄 The sync workflow changes require users to manually generate PAT with correct scopes
  • 🛡️ Consider keeping some safety thresholds enabled or adding user warnings about unfiltered content

2 file(s) reviewed, no comment(s)

@@ -32,7 +32,8 @@ jobs:
 upstream_sync_repo: lobehub/lobe-chat
 upstream_sync_branch: main
 target_sync_branch: main
-target_repo_token: ${{ secrets.GITHUB_TOKEN }} # automatically generated, no need to set
+target_repo_token: ${{ secrets.PAT_FOR_SYNC }} # automatically generated, no need to set
Contributor

This is not ideal. Please add GITHUB_TOKEN as a fallback.
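
One way to implement the suggested fallback is sketched below. This is only an illustration, not the exact change in this PR: it assumes the sync workflow is built around the aormsby/Fork-Sync-With-Upstream-action step that these parameter names come from, the job/step layout is assumed, and the PAT_FOR_SYNC secret name is reused from the diff above. In a GitHub Actions expression, the || operator returns the first non-empty operand, so the automatically generated GITHUB_TOKEN is used whenever PAT_FOR_SYNC is not configured.

# Illustrative excerpt of .github/workflows/sync.yml (layout assumed, not the PR's exact file)
jobs:
  sync_latest_from_upstream:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout target repo
        uses: actions/checkout@v4

      - name: Sync upstream changes
        uses: aormsby/Fork-Sync-With-Upstream-action@v3.4
        with:
          upstream_sync_repo: lobehub/lobe-chat
          upstream_sync_branch: main
          target_sync_branch: main
          # Prefer the user-supplied PAT; fall back to the default GITHUB_TOKEN
          # when the PAT_FOR_SYNC secret is not configured.
          target_repo_token: ${{ secrets.PAT_FOR_SYNC || secrets.GITHUB_TOKEN }}

Note that commits pushed with the default GITHUB_TOKEN do not trigger other workflows, which is one reason a PAT with the permissions listed in the description is preferred when available.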

// if (modelsOffSafetySettings.has(model)) {
// return 'OFF' as HarmBlockThreshold; // https://discuss.ai.google.dev/t/59352
// }
return HarmBlockThreshold.OFF;
Contributor

Where's the documentation for this update?

Author

https://ai.google.dev/api/generate-content#harmblockthreshold

FYI, when I tested it, the lowest level supported by the HARM_CIVIC_INTEGRITY category was BLOCK_NONE. This is not mentioned in the documentation.

Contributor

https://ai.google.dev/api/generate-content#v1beta.HarmCategory

You can use the OFF value, but there is no evidence to show any difference in results between BLOCK_NONE and OFF.
In practice, the probability of a response being truncated by the safety filters is now very low; they are far less stringent than the model's built-in safety measures.

Author

> but there is no evidence to show any difference in results between BLOCK_NONE and OFF.

I have tested it in Cherry Studio, and it works perfectly. At the same time, it also reduces the probability of Gemini producing unintended words (such as Hindi or Korean words in a long paragraph of Chinese or Japanese).

Contributor

Could you please test all models, including older models like Gemma and Gemini 1.5, to avoid potential compatibility issues?
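
For readers skimming this thread, here is a minimal sketch of what the change amounts to; it is not LobeChat's actual file layout. The buildSafetySettings helper name is made up for illustration, the enums are assumed to come from the @google/generative-ai SDK, and HarmBlockThreshold.OFF is written as it appears in the PR diff (older SDK versions may only accept the string form, as in the commented-out 'OFF' cast above).

import { HarmBlockThreshold, HarmCategory, type SafetySetting } from '@google/generative-ai';

// Send every harm category with the least restrictive threshold.
// Most categories previously used BLOCK_NONE; this PR switches them to OFF.
export const buildSafetySettings = (): SafetySetting[] => {
  const threshold = HarmBlockThreshold.OFF;

  return [
    { category: HarmCategory.HARM_CATEGORY_HARASSMENT, threshold },
    { category: HarmCategory.HARM_CATEGORY_HATE_SPEECH, threshold },
    { category: HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT, threshold },
    { category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT, threshold },
    // Per the testing reported above, the civic-integrity category appeared to
    // accept only BLOCK_NONE, so it may need to keep that lower setting.
  ];
};

Whether OFF behaves differently from BLOCK_NONE in practice is exactly the open question in this thread; the sketch is only meant to make the scope of the change concrete.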

ous50 added 2 commits May 26, 2025 11:28
Add the GITHUB_TOKEN back as a fallback so users can still run the workflow without a PAT.
Labels
🐛 Bug Something isn't working | 缺陷 Gemini size:S This PR changes 10-29 lines, ignoring generated files.