🚨FAQs | 常见问题🚨 #4614
Note
Please avoid creating issues regarding the following questions, as they might be closed without a response.
Tip
Documentation: https://llamafactory.readthedocs.io/en/latest/
Documentation (Chinese): https://llamafactory.readthedocs.io/zh-cn/latest/
NPU documentation (Chinese): https://ascend.github.io/docs/sources/llamafactory/
Getting-started tutorial (Chinese): https://zhuanlan.zhihu.com/p/695287607
Most common problems / 大多数问题
Versions of dependencies conflict / 依赖库版本冲突
Supported models are not found / 无法找到已支持的模型
llamafactory-cli: command not found / 无法找到命令
Please update the repository and reinstall using the following approach.
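A minimal sketch of the update-and-reinstall flow, assuming a local git clone of LLaMA-Factory (the extras in the pip command are a common choice, not mandatory):

```bash
cd LLaMA-Factory                    # path to your local clone (placeholder)
git pull                            # fetch the latest code
pip install -e ".[torch,metrics]"   # reinstall in editable mode
```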
Out-of-memory / 显存溢出
The out-of-memory (OOM) error during training usually means the remaining VRAM on one of the devices is insufficient to complete the computation. You can try the following methods to deal with this issue (a combined config sketch follows this list):

- Reduce the per-device batch size: `per_device_train_batch_size: 1`.
- Reduce the cutoff length: `cutoff_len: 512`.
- Enable the Liger kernel with `enable_liger_kernel: true` and Unsloth gradient checkpointing with `use_unsloth_gc: true`.
- Use `quantization_bit: 4` to quantize model parameters (only compatible with LoRA tuning).
- Use a paged optimizer: `optim: paged_adamw_8bit`.
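For illustration, a minimal LoRA SFT config combining these memory-saving options. The model, dataset, and output path are placeholders, and `gradient_accumulation_steps` is an addition (not from the list above) to keep the effective batch size reasonable:

```yaml
# low_mem_lora.yaml (hypothetical filename); run with: llamafactory-cli train low_mem_lora.yaml
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct  # placeholder model
stage: sft
do_train: true
finetuning_type: lora
quantization_bit: 4              # QLoRA; only compatible with LoRA tuning
dataset: identity                # placeholder dataset name
template: llama3
cutoff_len: 512                  # shorter sequences reduce activation memory
per_device_train_batch_size: 1
gradient_accumulation_steps: 8   # assumption: compensates for the small batch size
enable_liger_kernel: true
use_unsloth_gc: true             # Unsloth gradient checkpointing
optim: paged_adamw_8bit          # paged optimizer lowers peak VRAM
output_dir: saves/llama3-8b-lora # placeholder
```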
Unsatisfying fine-tuning results / 微调效果无法令人满意
Unsatisfying fine-tuning results are usually due to insufficient training samples, leading to underfitting. You can try the following methods to deal with this issue (a config sketch follows this list):

- Increase the number of training epochs (`num_train_epochs: 5.0`) or steps (`max_steps: 1000`).
- Increase the learning rate: `learning_rate: 2.0e-4`.
- Tune more parameters with `finetuning_type: freeze` or `finetuning_type: full`.
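A sketch of the hyperparameters adjusted against underfitting; every other key (model, dataset, template, and so on) stays as in your existing training config:

```yaml
num_train_epochs: 5.0    # or replace with max_steps: 1000
learning_rate: 2.0e-4
finetuning_type: freeze  # or: full (updates more parameters but needs more VRAM)
```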
Corrupted or repeated model responses / 胡乱或重复的模型回答
If this issue occurs before training, it is usually due to using an unaligned (base) model or a mismatched `template`. Please make sure an aligned (instruct/chat) model and the correct `template` are used. If this issue occurs after training, check whether the `template` used for training and for inference is consistent, and also check whether overfitting has appeared. You can decrease the number of epochs `num_train_epochs` and the learning rate `learning_rate` to deal with the overfitting issue.
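As an illustration, a minimal inference config that reuses the training-time template; the model, adapter path, and template name are placeholders:

```yaml
# chat_lora.yaml (hypothetical filename); run with: llamafactory-cli chat chat_lora.yaml
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct  # placeholder base model
adapter_name_or_path: saves/llama3-8b-lora               # placeholder adapter dir
template: llama3   # must match the template used during training
```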
Training hangs / 训练进程卡住
If distributed training was not enabled, use the following command to check whether the CUDA build of PyTorch is installed correctly: `python -c "import torch; print(torch.cuda.is_available())"`.
If distributed training was enabled, try setting the environment variable `export NCCL_P2P_LEVEL=NVL`.
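For multi-GPU runs, a sketch of applying the workaround before launching (the training config path is just an example from the repository's examples directory):

```bash
export NCCL_P2P_LEVEL=NVL   # restrict NCCL peer-to-peer transfers to NVLink paths
FORCE_TORCHRUN=1 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```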
LLaMA Board cannot display datasets / LLaMA Board 无法显示数据集
Please ensure that the working directory when launching LLaMA Board is the LLaMA-Factory root directory.
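A minimal sketch, assuming the repository was cloned into a directory named `LLaMA-Factory` (adjust the path to your clone):

```bash
cd LLaMA-Factory        # run from the repository root so relative dataset paths resolve
llamafactory-cli webui  # launches LLaMA Board
```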
How to shard model weights across multiple devices / 如何将模型权重拆分到多个设备上
During the training phase, refer to the examples for how to use DeepSpeed ZeRO-3 (recommended) or FSDP.
During the inference phase, use vLLM to enable tensor parallelism (see the examples). A config sketch follows.
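A minimal sketch of a vLLM-backed inference config; the model and template are placeholders, and the sharding behavior (vLLM splitting weights across the GPUs made visible to the process) is version-dependent:

```yaml
# chat_vllm.yaml (hypothetical filename); run with:
#   CUDA_VISIBLE_DEVICES=0,1 llamafactory-cli chat chat_vllm.yaml
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct  # placeholder
template: llama3
infer_backend: vllm
```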
How to use ORPO or SimPO / 如何使用 ORPO 或 SimPO
Change `pref_loss` in the example script to `orpo` or `simpo`.
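For illustration, the relevant lines of a DPO-style preference-training config (other keys as in the repository's DPO example; the `pref_beta` value is an assumed starting point to tune):

```yaml
stage: dpo
pref_loss: simpo   # or: orpo
pref_beta: 0.1     # assumed starting value; tune for your task
```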
How to debug with VSCode / 如何用 VSCode 调试程序
See #5337
Why the number of examples is small in pre-training / 为什么预训练样本数比实际的少
We automatically use packing in pre-training: multiple samples are concatenated into one sequence, so the number of examples displayed is smaller than the actual number of samples. For example, with `cutoff_len: 4096`, several short samples may be packed into each 4096-token sequence.
Will the training data be shuffled / 训练数据是否会被打乱
LLaMA-Factory randomly shuffles the training data by default. You can use `disable_shuffling` to turn off the shuffling.
How to enable streaming / 如何启用流式数据读取
If you want to use streaming, we recommend manually shuffling the dataset before training, since streamed data is read sequentially.
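A sketch of the relevant config keys (the `max_steps` value is an arbitrary example; with streaming, training length is given in steps rather than epochs):

```yaml
streaming: true
max_steps: 1000   # example value; streamed datasets have no precomputed epoch size
```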
Tip
If the problem still exists with the latest code, please create an issue.