Conversation

@abhilash1910 commented Apr 19, 2023

@rogerxfeng8 @delock @guoyejun Please review.
UTs pass on Borealis.

@delock (Contributor) commented Apr 20, 2023

What does the original caller look like? This API appears to be XPU-specific, and models using this function would become incompatible with hardware other than XPU. What would the CUDA and CPU code look like if this API were implemented for CUDA and CPU?

@abhilash1910 (Author) commented Apr 20, 2023

The original caller is torch.Generator(), which is CPU-specific; for CUDA it is torch.cuda.Generator(device=0), where device is an optional parameter.
Yes, this API is XPU-specific; there is a plan to support it in stock PyTorch with Jinghui.
For reference: https://github.com/intel-innersource/frameworks.ai.pytorch.ipex-gpu/blob/72b4b33a87b926748e54f548581796c8544d4548/intel_extension_for_pytorch/xpu/generator.py
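To illustrate the portability concern raised above, here is a minimal pure-Python sketch of an accelerator-abstraction pattern that would let model code create a generator without hard-coding a device-specific constructor. The class names, `create_generator`, and `get_accelerator` are hypothetical stand-ins (not the real DeepSpeed or PyTorch APIs); the tuples stand in for actual generator objects so the example is self-contained.

```python
# Hypothetical sketch: hide device-specific Generator construction behind a
# common interface so model code stays portable across CPU, CUDA, and XPU.
# All names here are illustrative, not real DeepSpeed/PyTorch APIs.

class CPUAccelerator:
    """Would wrap torch.Generator() on real hardware."""
    def create_generator(self, device_index=None):
        return ("cpu_generator", device_index)

class CUDAAccelerator:
    """Would wrap torch.cuda.Generator(device=device_index)."""
    def create_generator(self, device_index=0):
        return ("cuda_generator", device_index)

class XPUAccelerator:
    """Would wrap torch.xpu.Generator(), the API discussed in this PR."""
    def create_generator(self, device_index=0):
        return ("xpu_generator", device_index)

_ACCELERATORS = {
    "cpu": CPUAccelerator,
    "cuda": CUDAAccelerator,
    "xpu": XPUAccelerator,
}

def get_accelerator(name):
    # Model code calls this factory instead of torch.xpu.Generator() directly,
    # so the same code runs on hardware other than XPU.
    return _ACCELERATORS[name]()

gen = get_accelerator("xpu").create_generator(device_index=0)
```

Under this pattern, only the accelerator backends know about device-specific generator classes; the model code above never mentions XPU, CUDA, or CPU explicitly.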


@delock (Contributor) commented Apr 21, 2023

Can you post a link to code where this API is called? @abhilash1910


@abhilash1910 (Author) commented Apr 21, 2023

@delock This is for Genslm: https://jira.devtools.intel.com/browse/PYTORCHDGQ-2390 (the relevant links are there).

@delock (Contributor) commented Apr 24, 2023

@abhilash1910 (Author) commented Apr 24, 2023

Yes @delock, in Genslm this is currently bypassed so that shuffle is not enabled, but local testing indicates that enabling shuffle requires torch.xpu.Generator(). This is referenced in the comment at line 148 in link 1; I sent the source where torch.Generator() gets called for shuffling. Since torch-gpu will be updated, this is needed for now to support the Genslm model code.
Also, I think that from the XPU perspective there should be an analogous Generator function.
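To show why enabling shuffle needs an explicit generator (the role torch.Generator() plays for a DataLoader, and the gap torch.xpu.Generator() would fill on XPU), here is a self-contained sketch using the standard library's random.Random as a stand-in for a framework generator. The function name is illustrative.

```python
import random

# Illustrative sketch: an explicit, seedable generator makes shuffling
# reproducible. random.Random stands in for torch.Generator() here;
# shuffled_indices is a hypothetical helper, not a real API.

def shuffled_indices(num_samples, seed):
    gen = random.Random(seed)           # explicit generator, analogous to
                                        # torch.Generator().manual_seed(seed)
    indices = list(range(num_samples))
    gen.shuffle(indices)                # shuffle draws only from this generator
    return indices

# The same seed reproduces the same shuffle order across runs, which is
# the guarantee a device-specific Generator (CPU, CUDA, or XPU) must give
# when passed to a DataLoader for shuffling.
a = shuffled_indices(10, seed=42)
b = shuffled_indices(10, seed=42)
assert a == b
assert sorted(a) == list(range(10))
```

Without an explicit generator for the target device, the shuffle either falls back to a different device's RNG or cannot be seeded deterministically, which is why the model code bypasses shuffle until torch.xpu.Generator() is available.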

@abhilash1910 abhilash1910 marked this pull request as draft May 16, 2023 13:34