update readme

This commit is contained in:
parent fc547ee591
commit b96d84835f
@@ -351,10 +351,9 @@ To utilize Ascend NPU devices for (distributed) training and inference, you need
 | torch-npu | 2.2.0  | 2.2.0  |
 | deepspeed | 0.13.2 | 0.13.2 |
 
-Remember to use `ASCEND_RT_VISIBLE_DEVICES` instead of `CUDA_VISIBLE_DEVICES` to specify the device to use.
-
-If you cannot infer model on NPU devices, try setting `do_sample: false` in the configurations.
+> [!NOTE]
+> Remember to use `ASCEND_RT_VISIBLE_DEVICES` instead of `CUDA_VISIBLE_DEVICES` to specify the device to use.
+>
+> If you cannot infer model on NPU devices, try setting `do_sample: false` in the configurations.
 
 </details>
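The note added in this hunk turns on one environment variable. A minimal usage sketch, assuming an Ascend NPU host (the device IDs `0,1` are illustrative placeholders, not from the diff):

```shell
# ASCEND_RT_VISIBLE_DEVICES plays the role CUDA_VISIBLE_DEVICES plays on GPU hosts:
# it restricts the process to the listed Ascend NPU device IDs.
export ASCEND_RT_VISIBLE_DEVICES=0,1   # illustrative device IDs
echo "Visible NPUs: $ASCEND_RT_VISIBLE_DEVICES"
```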
@@ -351,10 +351,9 @@ pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/downl
 | torch-npu | 2.2.0  | 2.2.0  |
 | deepspeed | 0.13.2 | 0.13.2 |
 
-Remember to use `ASCEND_RT_VISIBLE_DEVICES` instead of `CUDA_VISIBLE_DEVICES` to specify the device to use.
-
-If inference does not work properly, try setting `do_sample: false`.
+> [!NOTE]
+> Remember to use `ASCEND_RT_VISIBLE_DEVICES` instead of `CUDA_VISIBLE_DEVICES` to specify the device to use.
+>
+> If inference does not work properly, try setting `do_sample: false`.
 
 </details>
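The other hint in this note is a decoding setting. A sketch of where it would live, assuming a YAML generation config of the kind this project uses (the surrounding keys are hypothetical, only `do_sample` comes from the diff):

```yaml
# Hypothetical inference config fragment; `do_sample` is the key named in the note.
do_sample: false   # greedy decoding; avoids sampling ops that may fail on NPU devices
```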