WA for xpu Generator to support torch API #16
base: main
Conversation
What does the original caller look like? This API looks XPU-specific and will make models that use this function incompatible with hardware other than XPU. What would the CUDA and CPU code look like if this API were implemented for CUDA and CPU?
The original caller is torch.Generator(), which is specific to CPU; for CUDA it is torch.cuda.Generator(device=0) (device is an optional parameter).
Can you post a link to the code where this API is called? @abhilash1910
@delock This is for Genslm: https://jira.devtools.intel.com/browse/PYTORCHDGQ-2390 (links are present there).
@abhilash1910 I'm not sure whether you are referring to the genslm code or the pytorch code. I can't see xpu_generator in the first link, and can only see torch.Generator in the second link.
Yes @delock, for Genslm. Shuffling is currently bypassed in the model code, but local testing indicates that torch.xpu.Generator() is required to enable shuffle. This is referenced in the comment at line 148 in link 1; I sent the source where torch.Generator() gets called for shuffling. Since torch-gpu will get updated, this WA supports the Genslm model code for now.
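For context, a minimal sketch of the device-agnostic pattern under discussion: DataLoader shuffling takes a torch.Generator for reproducibility, and a model could pick the generator per device. The make_generator helper and the torch.xpu.Generator branch are assumptions based on this PR (torch.xpu.Generator is the proposed WA API, not an upstream one); only the CPU path is exercised here.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def make_generator(device="cpu"):
    # Hypothetical device dispatch; mirrors torch.cuda.Generator for CUDA
    # and the proposed torch.xpu.Generator WA for XPU (assumption).
    if device.startswith("cuda") and torch.cuda.is_available():
        return torch.Generator(device=device)
    if device.startswith("xpu") and hasattr(torch, "xpu"):
        return torch.xpu.Generator(device=device)  # proposed WA API (assumption)
    return torch.Generator()  # CPU default

# Seeded generator makes DataLoader shuffling reproducible.
g = make_generator("cpu")
g.manual_seed(1234)

dataset = TensorDataset(torch.arange(10))
loader = DataLoader(dataset, batch_size=2, shuffle=True, generator=g)
order = [int(x) for batch in loader for x in batch[0]]
```

Re-creating the loader with a generator seeded to the same value reproduces the same shuffle order, which is what enabling shuffle in Genslm would rely on.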
@rogerxfeng8 @delock @guoyejun Please review.
UTs pass on Borealis.