
Conversation

@hameerabbasi (Collaborator)

No description provided.

def register_binary_nonlinear(op: OpType) -> Callable:
    def impl(lhs: ComplexTensor, rhs: ComplexTensor, *args, **kwargs) -> ComplexTensor:
-        a_r, a_i = split_complex_tensor(lhs)
+        a_r, a_i = split_complex_arg(lhs)
@hameerabbasi (Collaborator, Author):
This is needed because either lhs or rhs may be a ComplexTensor, but it isn't guaranteed which one.
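
For context, a minimal sketch of what such a helper could look like. ComplexTensor and split_complex_tensor are names taken from the diff above; the import path and the handling of non-ComplexTensor operands are assumptions.

import torch

from complex_tensor import ComplexTensor, split_complex_tensor  # import path is an assumption

def split_complex_arg(arg):
    # Hypothetical reconstruction: accept whichever operand we are handed,
    # whether or not it is a ComplexTensor.
    if isinstance(arg, ComplexTensor):
        return split_complex_tensor(arg)
    if isinstance(arg, torch.Tensor):
        # Plain tensors: complex dtypes split into real/imag views,
        # real dtypes get a zero imaginary part.
        return (arg.real, arg.imag) if arg.is_complex() else (arg, torch.zeros_like(arg))
    if isinstance(arg, complex):
        return arg.real, arg.imag
    # Real Python scalars have no imaginary part.
    return arg, 0.0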

        if alpha is not None:
            return impl_with_alpha(lhs, rhs, *args, alpha=alpha, **kwargs)
-        a_r, a_i = split_complex_tensor(lhs)
+        a_r, a_i = split_complex_arg(lhs)
@hameerabbasi (Collaborator, Author) commented on Sep 3, 2025:
Same as #11 (comment).
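
As a reference for what these impls compute once both operands are split, here is a minimal sketch of a nonlinear binary op (multiplication) built on split_complex_arg. The decomposition (a_r + i*a_i)*(b_r + i*b_i) = (a_r*b_r - a_i*b_i) + i*(a_r*b_i + a_i*b_r) is standard; the exact wiring inside register_binary_nonlinear and the ComplexTensor constructor signature are assumptions.

def mul_impl(lhs, rhs):
    # Either operand (but not necessarily both) may be a ComplexTensor.
    a_r, a_i = split_complex_arg(lhs)
    b_r, b_i = split_complex_arg(rhs)
    # (a_r + i*a_i) * (b_r + i*b_i)
    out_r = a_r * b_r - a_i * b_i
    out_i = a_r * b_i + a_i * b_r
    return ComplexTensor(out_r, out_i)  # constructor signature is an assumption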

@hameerabbasi changed the title from "Add gradient checks" to "Add ops required for gradient checks" on Sep 9, 2025.
@hameerabbasi requested a review from amjames on September 9, 2025 at 07:26.
@hameerabbasi (Collaborator, Author):

cc @amjames Ready for review -- each commit can be reviewed separately.

@benjaminglass1 left a comment:

@hameerabbasi Overall this looks good to me! I had a couple comments on de-duplicating some logic, and I didn't spend a lot of time looking at the actual numerical logic of the implemented functions (I'm trusting the tests to catch issues there).

I assume the stuff you need to push upstream includes the functions that are skipping __torch_dispatch__?

@hameerabbasi merged commit 8698ab8 into openteams-ai:main on Sep 15, 2025 (3 checks passed).
@hameerabbasi deleted the gradient-checks branch on September 15, 2025 08:08.