Add cummin operation for tensors across backends #3821
Conversation
Introduces the cummin (cumulative minimum) operation for both float and int tensors, with implementations for all supported backends (ndarray, tch, candle, cubecl, fusion, router). Updates the tensor API, backend traits, and IR to support cummin, and adds comprehensive tests and documentation. The autodiff backend panics for cummin as proper gradient support is not yet implemented.
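To illustrate the semantics being added (not the Burn implementation itself), here is a minimal sketch of the cumulative minimum over a 1-D sequence, where `out[i] = min(x[0..=i])`:

```rust
// Plain-Rust sketch of cummin semantics: each output element is the
// minimum of all input elements up to and including that position.
fn cummin(xs: &[f32]) -> Vec<f32> {
    let mut out = Vec::with_capacity(xs.len());
    let mut running = f32::INFINITY;
    for &x in xs {
        running = running.min(x);
        out.push(running);
    }
    out
}

fn main() {
    let xs = [3.0, 1.0, 4.0, 1.0, 0.5, 2.0];
    // Running minimum: [3.0, 1.0, 1.0, 1.0, 0.5, 0.5]
    println!("{:?}", cummin(&xs));
}
```

In the actual tensor API the operation additionally takes a dimension argument, applying this scan independently along the chosen axis.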
Codecov Report ❌

```
@@            Coverage Diff             @@
##              main    #3821      +/-  ##
==========================================
- Coverage    64.27%   64.21%    -0.06%
==========================================
  Files         1114     1115        +1
  Lines       133659   133919      +260
==========================================
+ Hits         85904    85999       +95
- Misses       47755    47920      +165
==========================================
```
See related comments in cummax / cumprod
```rust
// Cummin backward pass requires scatter_add which is not yet implemented
// The gradient should only flow to the first occurrence of each minimum value
panic!(
    "Cummin is not supported for autodiff backend. \
    Proper implementation requires scatter_add operation."
);
```
Same comment as cummax: `tensor.scatter` applies the sum reduction (the `scatter_add` equivalent). We need to address this discrepancy between expected and actual behavior at the tensor API level; `select_assign` also performs a sum 😅
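As a reference for the backward rule under discussion (a plain-Rust sketch, not the Burn code): the upstream gradient at each output position is routed to the first index attaining the running minimum, and collisions are accumulated by summation, which is exactly the `scatter_add` behavior the panic message refers to:

```rust
// Sketch of the 1-D cummin backward pass: route grad_out[i] to the first
// index achieving the running minimum over xs[0..=i], summing on collision
// (a scatter_add-style accumulation).
fn cummin_backward(xs: &[f32], grad_out: &[f32]) -> Vec<f32> {
    let mut grad_in = vec![0.0f32; xs.len()];
    let mut min_val = f32::INFINITY;
    let mut min_idx = 0usize;
    for i in 0..xs.len() {
        // Strict '<' keeps min_idx at the FIRST occurrence on ties.
        if xs[i] < min_val {
            min_val = xs[i];
            min_idx = i;
        }
        // Several outputs may map to the same input index, hence the sum.
        grad_in[min_idx] += grad_out[i];
    }
    grad_in
}

fn main() {
    let xs = [3.0, 1.0, 4.0, 1.0, 0.5, 2.0];
    let g = cummin_backward(&xs, &[1.0; 6]);
    // Positions 1..=3 all have running minimum 1.0 at index 1,
    // so index 1 accumulates three units of gradient.
    println!("{:?}", g); // [1.0, 3.0, 0.0, 0.0, 2.0, 0.0]
}
```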
Merged into #3819
Pull Request Template
Checklist
- The `cargo run-checks` command has been executed.

Related Issues/PRs
Changes
Testing
Added 5 comprehensive tests in `burn-tensor/src/tests/ops/cummin.rs`