Use CUDA 13.0 on CI #862
Conversation
.github/workflows/docs.yaml
Outdated
  # PR.
  python-version: ['3.10']
- cuda-version: ['12.6']
+ cuda-version: ['12.8']
Drive-by: 12.6 is still supported, but the "default" CUDA version supported by PyTorch is currently 12.8.
It's possible that the GitHub runner linux.4xlarge.nvidia.gpu does not have CUDA 12.8, and that's why this job was erroring. I've reverted it since it's not related to this change, but we could leave a TODO to update the CUDA version if it's something we need to change eventually.
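A minimal diagnostic sketch, assuming torch is installed on the runner, that could be added as a temporary debug step to confirm which CUDA version the job actually sees (illustrative only, not part of this PR's diff):

```python
# Temporary debug step: print the CUDA version torch was built against and
# whether a GPU is actually usable on this runner.
import torch

print("torch built with CUDA:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```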
  if sys.platform == "linux":
      if args[0].device.type == "cuda":
-         atol = 2
+         atol = 3
I think we should be able to preserve the previous stricter tolerance with CUDA < 13 with something like:
-         atol = 3
+         atol = 3 if cuda_version_used_for_building_torch() >= (13, 0) else 2
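For reference, a hedged sketch of how such a helper could look if implemented by parsing torch.version.cuda; the actual cuda_version_used_for_building_torch helper in the test utilities may differ, so treat this as an assumption:

```python
# Hypothetical implementation sketch: torch.version.cuda is a "major.minor"
# string for CUDA builds of torch, and None for CPU-only builds.
import torch

def cuda_version_used_for_building_torch() -> tuple[int, int]:
    cuda = torch.version.cuda
    if cuda is None:
        # CPU-only build: report (0, 0) so callers keep the stricter tolerance.
        return (0, 0)
    major, minor = cuda.split(".")[:2]
    return (int(major), int(minor))
```

With this, `atol = 3 if cuda_version_used_for_building_torch() >= (13, 0) else 2` keeps atol = 2 on CUDA 12.x builds and only relaxes it on CUDA 13.0 and newer.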
Thanks @Dan-Flores for the fixes! LGTM, let's close this one so you can open a new one.
It seems from https://github.com/pytorch/torchcodec/actions/runs/17402821403/job/49401670135?pr=831 that test-infra stopped supporting 12.9 and is generating jobs for 13.0 instead, so we need to change our jobs to reflect that.