
Conversation

@antimora

@antimora (Collaborator) commented Oct 2, 2025

Introduces the cummin (cumulative minimum) operation for both float and int tensors, with implementations for all supported backends (ndarray, tch, candle, cubecl, fusion, router). Updates the tensor API, backend traits, and IR to support cummin, and adds comprehensive tests and documentation. The autodiff backend panics for cummin as proper gradient support is not yet implemented.

Pull Request Template

Checklist

  • Confirmed that the cargo run-checks command has been executed.
  • Made sure the book is up to date with changes in this PR.

Related Issues/PRs

Changes

  • Added cummin(dim: usize) method to the Tensor API for computing cumulative minimum along a dimension
  • Supports both float and int tensor types
  • Documentation includes note about autodiff backend limitation (requires scatter_add)
  • Backend implementations:
    • NdArray: Iterative minimum comparison kernel
    • Tch (LibTorch): Native PyTorch cummin operation
    • Candle: Manual implementation using slice-and-concatenate pattern with broadcast_minimum
    • CubeCL (WGPU): Naive GPU kernel (O(n^2), suitable for small tensors; TODO: parallel scan algorithm)
    • Autodiff: Panics with clear error message (proper gradient requires scatter_add)
    • Fusion: Full operation fusion support for float and int
    • Router: Full operation routing support for float and int
  • Added CumMin operation variant to burn-ir IR layer
  • Updated pattern matching and related ops across all backends
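The NdArray and CubeCL implementations both come down to carrying a running minimum along the scanned dimension. A minimal 1-D sketch of that idea in plain Rust (hypothetical `cummin_1d` helper, not the actual Burn kernel):

```rust
/// Hypothetical 1-D sketch of the iterative cumulative-minimum kernel;
/// the real backends generalize this to an arbitrary dimension.
fn cummin_1d(input: &[f32]) -> Vec<f32> {
    let mut out = Vec::with_capacity(input.len());
    let mut running = f32::INFINITY;
    for &x in input {
        // Carry forward the minimum seen so far along the dimension.
        running = running.min(x);
        out.push(running);
    }
    out
}

fn main() {
    let out = cummin_1d(&[3.0, 1.0, 4.0, 1.0, 5.0, 0.5]);
    println!("{:?}", out); // [3.0, 1.0, 1.0, 1.0, 1.0, 0.5]
}
```

The sequential dependency on `running` is why the naive GPU kernel is O(n^2) per element and why a parallel scan is listed as a TODO.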

Testing

Added 5 comprehensive tests in burn-tensor/src/tests/ops/cummin.rs
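For reference, the along-a-dimension semantics those tests exercise can be illustrated with a plain-Rust 2-D sketch (row-major data, hypothetical `cummin_dim1` helper, not the Burn test code):

```rust
/// Hypothetical helper: cumulative minimum of a row-major matrix along
/// dim 1 (within each row), mirroring what `tensor.cummin(1)` computes.
fn cummin_dim1(data: &[f32], rows: usize, cols: usize) -> Vec<f32> {
    let mut out = data.to_vec();
    for r in 0..rows {
        for c in 1..cols {
            // Each element becomes the minimum of itself and its
            // predecessor's running minimum within the same row.
            let prev = out[r * cols + c - 1];
            let idx = r * cols + c;
            out[idx] = out[idx].min(prev);
        }
    }
    out
}

fn main() {
    // [[3, 1, 2], [0, 5, 4]] -> [[3, 1, 1], [0, 0, 0]]
    let out = cummin_dim1(&[3.0, 1.0, 2.0, 0.0, 5.0, 4.0], 2, 3);
    assert_eq!(out, vec![3.0, 1.0, 1.0, 0.0, 0.0, 0.0]);
}
```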

@antimora added the feature label on Oct 2, 2025
@codecov

codecov bot commented Oct 2, 2025

Codecov Report

❌ Patch coverage is 36.53846% with 165 lines in your changes missing coverage. Please review.
✅ Project coverage is 64.21%. Comparing base (64040b9) to head (002f89f).
⚠️ Report is 2 commits behind head on main.

Files with missing lines Patch % Lines
crates/burn-cubecl/src/ops/numeric.rs 0.00% 46 Missing ⚠️
crates/burn-fusion/src/ops/float.rs 0.00% 22 Missing ⚠️
crates/burn-fusion/src/ops/int.rs 0.00% 22 Missing ⚠️
crates/burn-router/src/ops/op_float.rs 0.00% 13 Missing ⚠️
crates/burn-router/src/ops/op_int.rs 0.00% 13 Missing ⚠️
crates/burn-router/src/runner.rs 0.00% 11 Missing ⚠️
crates/burn-fusion/src/stream/context.rs 0.00% 5 Missing ⚠️
crates/burn-tensor/src/tensor/ops/qtensor.rs 0.00% 5 Missing ⚠️
crates/burn-autodiff/src/ops/tensor.rs 0.00% 4 Missing ⚠️
crates/burn-ir/src/operation.rs 0.00% 4 Missing ⚠️
... and 7 more
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3821      +/-   ##
==========================================
- Coverage   64.27%   64.21%   -0.06%     
==========================================
  Files        1114     1115       +1     
  Lines      133659   133919     +260     
==========================================
+ Hits        85904    85999      +95     
- Misses      47755    47920     +165     


@laggui (Member) left a comment


See related comments in cummax / cumprod

Comment on lines +1669 to +1675
// Cummin backward pass requires scatter_add which is not yet implemented
// The gradient should only flow to the first occurrence of each minimum value
panic!(
    "Cummin is not supported for autodiff backend. \
    Proper implementation requires scatter_add operation."
);
}

Same comment as cummax: tensor.scatter applies the sum reduction (scatter_add equivalent).

We need to improve this expected behavior discrepancy at the tensor API level. select_assign also performs a sum 😅
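For context, the gradient routing under discussion can be sketched in plain Rust: each upstream gradient element is scatter-added onto the position holding the first occurrence of the running minimum (hypothetical 1-D helper, not the proposed Burn implementation):

```rust
/// Hypothetical 1-D sketch of the cummin backward pass described above:
/// grad_out[i] is scatter-added onto the input index of the FIRST
/// occurrence of the running minimum at step i.
fn cummin_backward_1d(input: &[f32], grad_out: &[f32]) -> Vec<f32> {
    let mut grad_in = vec![0.0; input.len()];
    let mut running = f32::INFINITY;
    let mut argmin = 0;
    for i in 0..input.len() {
        if input[i] < running {
            // Strict `<` keeps the first occurrence on ties.
            running = input[i];
            argmin = i;
        }
        // The scatter_add step: accumulate at the argmin index.
        grad_in[argmin] += grad_out[i];
    }
    grad_in
}

fn main() {
    // input [3, 1, 2, 1]: the running minimum 1 first occurs at index 1,
    // so the gradients from positions 1..=3 all accumulate there.
    let g = cummin_backward_1d(&[3.0, 1.0, 2.0, 1.0], &[1.0, 1.0, 1.0, 1.0]);
    assert_eq!(g, vec![1.0, 3.0, 0.0, 0.0]);
}
```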

@antimora

@antimora (Collaborator, Author) commented Oct 6, 2025

Merged into #3819

@antimora antimora closed this Oct 6, 2025