Outline & Motivation
On the PyTorch 2.9 RC, calling torch.get_float32_matmul_precision() triggers the warning:
UserWarning: Please use the new API settings to control TF32 behavior, such as torch.backends.cudnn.conv.fp32_precision = 'tf32' or torch.backends.cuda.matmul.fp32_precision = 'ieee'. Old settings, e.g, torch.backends.cuda.matmul.allow_tf32 = True, torch.backends.cudnn.allow_tf32 = True, allowTF32CuDNN() and allowTF32CuBLAS() will be deprecated after Pytorch 2.9. Please see https://pytorch.org/docs/main/notes/cuda.html#tensorfloat-32-tf32-on-ampere-and-later-devices (Triggered internally at /pytorch/aten/src/ATen/Context.cpp:80.)
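As a rough repro sketch (assuming a PyTorch 2.9 RC build; warnings are normally shown only once per process, so I force them to always display):

```python
import warnings
import torch

# Surface UserWarnings that PyTorch would otherwise emit only once.
warnings.simplefilter("always")

# On a PyTorch 2.9 RC build, this call alone emits the TF32 deprecation warning.
print(torch.get_float32_matmul_precision())
```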
For my use case, the warning seems to be coming from
if torch.get_float32_matmul_precision() == "highest":  # default
and
if _is_ampere_or_later() and torch.get_float32_matmul_precision() != "highest":
Naively testing torch.set_float32_matmul_precision("high") in my REPL didn't seem to throw any errors.
I am unsure about the true extent of the changes needed to account for the fp32 precision API change introduced in PyTorch 2.9 (see the notes at https://docs.pytorch.org/docs/2.9/notes/cuda.html) beyond the examples above.
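For illustration only, here is a rough sketch of how such a check might be version-gated to prefer the new per-backend API when it is available, falling back to the old global setting otherwise. The helper name _matmul_uses_tf32 is hypothetical, and this assumes torch.backends.cuda.matmul.fp32_precision is readable as described in the 2.9 CUDA notes; I have not verified it against the final release:

```python
import torch

def _matmul_uses_tf32() -> bool:
    # Sketch: prefer the new per-backend fp32_precision attribute (PyTorch >= 2.9),
    # otherwise fall back to the old global matmul precision setting.
    matmul_backend = torch.backends.cuda.matmul
    if hasattr(matmul_backend, "fp32_precision"):
        # New API: "ieee" keeps full fp32, "tf32" allows reduced precision.
        return matmul_backend.fp32_precision == "tf32"
    # Old API: anything other than "highest" allows TF32 on Ampere and later.
    return torch.get_float32_matmul_precision() != "highest"
```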
Pitch
No response
Additional context
No response