diff --git a/README.md b/README.md
index a138d646..826512c6 100644
--- a/README.md
+++ b/README.md
@@ -351,10 +351,9 @@ To utilize Ascend NPU devices for (distributed) training and inference, you need
 | torch-npu | 2.2.0 | 2.2.0 |
 | deepspeed | 0.13.2 | 0.13.2 |
 
-> [!NOTE]
-> Remember to use `ASCEND_RT_VISIBLE_DEVICES` instead of `CUDA_VISIBLE_DEVICES` to specify the device to use.
->
-> If you cannot infer model on NPU devices, try setting `do_sample: false` in the configurations.
+Remember to use `ASCEND_RT_VISIBLE_DEVICES` instead of `CUDA_VISIBLE_DEVICES` to specify the device to use.
+
+If you cannot infer model on NPU devices, try setting `do_sample: false` in the configurations.
diff --git a/README_zh.md b/README_zh.md
index a0373711..d41ff13a 100644
--- a/README_zh.md
+++ b/README_zh.md
@@ -351,10 +351,9 @@ pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/downl
 | torch-npu | 2.2.0 | 2.2.0 |
 | deepspeed | 0.13.2 | 0.13.2 |
 
-> [!NOTE]
-> 请记得使用 `ASCEND_RT_VISIBLE_DEVICES` 而非 `CUDA_VISIBLE_DEVICES` 来指定您使用的设备。
->
-> 如果遇到无法正常推理的情况，请尝试设置 `do_sample: false`。
+请记得使用 `ASCEND_RT_VISIBLE_DEVICES` 而非 `CUDA_VISIBLE_DEVICES` 来指定您使用的设备。
+
+如果遇到无法正常推理的情况，请尝试设置 `do_sample: false`。
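For context on the note this diff reformats, below is a minimal usage sketch. It is not part of the diff itself, and the launch script name and arguments are hypothetical placeholders, not the project's confirmed CLI; only the `ASCEND_RT_VISIBLE_DEVICES` variable and the `do_sample: false` setting come from the README text.

```bash
# Select Ascend NPUs 0 and 1 for this run. On NPU devices this variable is used
# instead of CUDA_VISIBLE_DEVICES, which has no effect there.
export ASCEND_RT_VISIBLE_DEVICES=0,1

# Hypothetical launch command for illustration only; substitute the project's
# actual training/inference entry point and arguments.
python train.py --config my_config.yaml

# If inference fails on NPU devices, disable sampling in the generation settings,
# e.g. set `do_sample: false` in the relevant configuration file.
```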