
Conversation

PastaPastaPasta (Member)

Summary

  • Implements extraction of special transaction fields for compact block filters
  • Achieves feature parity between bloom filters and compact block filters for special transactions
  • Enables SPV clients to detect special transactions using either filtering mechanism

What this PR does

This PR adds support for Dash special transactions to the compact block filter implementation (BIP-157/158). Previously, only bloom filters could detect special transaction fields, leaving compact block filter users unable to track masternode-related transactions and platform operations.

Implementation Details

The implementation uses a delegation pattern to extract special transaction fields without creating circular dependencies:

  • ExtractSpecialTxFilterElements() function extracts relevant fields from each special transaction type
  • Fields extracted match exactly those in the bloom filter implementation (CheckSpecialTransactionMatchesAndUpdate)
  • All 5 special transaction types are supported:
    • ProRegTx: collateral outpoint, owner/voting keys, payout script
    • ProUpServTx: ProTx hash, operator payout script
    • ProUpRegTx: ProTx hash, voting key, payout script
    • ProUpRevTx: ProTx hash
    • AssetLockTx: credit output scripts (excluding OP_RETURN)
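
For orientation, here is a minimal sketch of the call shape, based on the signatures quoted in the review comments below; the in-tree code may differ in detail:

// evo/specialtx_filter.h -- delegated extraction API (sketch)
#include <functional>
#include <vector>

class CTransaction; // forward declaration keeps blockfilter.cpp decoupled from Evo headers

// Calls addElement once per special-transaction field that should be indexed
// (collateral outpoint, owner/voting key IDs, payout scripts, credit output scripts).
void ExtractSpecialTxFilterElements(const CTransaction& tx,
                                    const std::function<void(const std::vector<unsigned char>&)>& addElement);

// blockfilter.cpp -- integration inside BasicFilterElements() (sketch)
for (const auto& tx : block.vtx) {
    ExtractSpecialTxFilterElements(*tx, [&elements](const std::vector<unsigned char>& data) {
        elements.emplace(data.begin(), data.end()); // std::set deduplicates repeated elements
    });
}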

Testing

Comprehensive test coverage includes:

  • Unit tests: Test all 5 special transaction types with realistic payloads
  • Functional tests: Create actual AssetLockTx transactions with proper signatures
  • Edge cases: Empty scripts, OP_RETURN exclusion, multiple outputs

Test plan

  • Unit tests pass (make check)
  • Functional test passes (test/functional/p2p_blockfilters.py)
  • All special transaction types covered in tests
  • Verified special fields are added to filters (via debug logging)

🤖 Generated with Claude Code

github-actions bot commented Aug 19, 2025

⚠️ Potential Merge Conflicts Detected

This PR has potential conflicts with the following open PRs:

Please coordinate with the authors of these PRs to avoid merge conflicts.

coderabbitai bot commented Aug 19, 2025

Walkthrough

Adds a new module to extract compact-filter elements from Dash special transactions (evo/specialtx_filter.{h,cpp}) and integrates it into blockfilter construction. Adds a corresponding bloom-filter helper CBloomFilter::CheckSpecialTransactionMatchesAndUpdate for parity. Updates build files to include the new module. Introduces blockfilter index versioning (CURRENT_VERSION = 2) with DB key handling and compatibility checks. Extends unit and functional tests to cover special transactions and AssetLockTxs. Adjusts the linter's list of expected circular dependencies and adds a few include/forward-declare tweaks. Adds release notes documenting the breaking index format change.
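
For illustration only, a hypothetical sketch of what such a compatibility check could look like; DB_FILTER_VERSION and CheckDbVersion are illustrative names, and the actual key handling in index/blockfilterindex.cpp may differ:

// Hypothetical: stamp and verify the index format version on startup.
static constexpr uint8_t DB_FILTER_VERSION{'V'}; // illustrative DB key

bool BlockFilterIndex::CheckDbVersion()
{
    uint32_t version{0};
    if (!m_db->Read(DB_FILTER_VERSION, version)) {
        // Fresh index: record the current format so later runs can verify it.
        return m_db->Write(DB_FILTER_VERSION, uint32_t{CURRENT_VERSION});
    }
    // Filters built with the old format lack special-transaction elements, so a
    // mismatch should force a rebuild rather than serving incompatible data.
    return version == CURRENT_VERSION;
}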

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes


coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (8)
src/evo/specialtx_filter.h (1)

21-22: Well-scoped API; consider passing a byte span to avoid copies

The interface is clean and decoupled via forward-declaring CTransaction. Minor optimization: taking a Span/byte-view instead of std::vector would avoid an extra allocation/copy at call sites that already have contiguous bytes.

Apply in this file:

-void ExtractSpecialTxFilterElements(const CTransaction& tx,
-                                   const std::function<void(const std::vector<unsigned char>&)>& addElement);
+void ExtractSpecialTxFilterElements(const CTransaction& tx,
+                                   const std::function<void(Span<const unsigned char>)>& addElement);

And adjust callers to accept Span (see blockfilter.cpp suggestion).

src/blockfilter.cpp (1)

200-203: If adopting a span-based callback, avoid vector reallocation here

If you change the callback to accept a Span, you can construct in-place without creating a temporary std::vector.

-        ExtractSpecialTxFilterElements(*tx, [&elements](const std::vector<unsigned char>& data) {
-            elements.emplace(data.begin(), data.end());
-        });
+        ExtractSpecialTxFilterElements(*tx, [&elements](Span<const unsigned char> data) {
+            elements.emplace(data.begin(), data.end());
+        });
src/evo/specialtx_filter.cpp (2)

63-67: Ensure PROTOCOL_VERSION is declared via an explicit include

CDataStream uses PROTOCOL_VERSION here. Depending on transitive includes can be brittle; include the defining header explicitly.

 #include <streams.h>
+#include <protocol.h> // for PROTOCOL_VERSION

If your tree places PROTOCOL_VERSION in version.h instead, include that instead of protocol.h.


45-50: Minor: Avoid template overreach; restrict AddHashElement to fixed-size hash-like types

Template currently accepts any type with begin()/end(). If misused, it could accept arbitrary containers. Constrain via concepts or overloads for known types (uint256, CKeyID), or add a static_assert on size.

Example using concepts (C++20):

-template<typename T>
-static void AddHashElement(const T& hash,
-                          const std::function<void(const std::vector<unsigned char>&)>& addElement)
+template <typename T>
+concept HashLike = requires(const T& t) {
+    { t.begin() } -> std::input_iterator;
+    { t.end() } -> std::sentinel_for<decltype(t.begin())>;
+} && (sizeof(typename std::iterator_traits<decltype(std::declval<T>().begin())>::value_type) == 1);
+
+template<HashLike T>
+static void AddHashElement(const T& hash,
+                           const std::function<void(const std::vector<unsigned char>&)>& addElement)
 {
     addElement(std::vector<unsigned char>(hash.begin(), hash.end()));
 }

Adjust to your project’s preferred constraints style.

test/functional/p2p_blockfilters.py (4)

308-356: Add UTXO sanity checks and a guard against negative change

The helper is solid. Two small hardening tweaks improve reliability:

  • Ensure there is at least one spendable UTXO before indexing [0].
  • Assert that the computed change is not negative (avoids constructing invalid tx when UTXO value is unexpectedly small).

Apply this diff:

-        # Get a coin to lock
-        unspent = node.listunspent()[0]
+        # Get a coin to lock
+        utxos = node.listunspent()
+        assert utxos, "No spendable UTXOs. Ensure mining to wallet succeeded."
+        unspent = utxos[0]
@@
-        remaining = Decimal(str(unspent['amount'])) - amount_locked - fee
+        remaining = Decimal(str(unspent['amount'])) - amount_locked - fee
+        # Sanity check to avoid creating invalid transactions
+        assert remaining >= 0, f"Insufficient funds in UTXO ({unspent['amount']}) to lock {amount_locked} + fee {fee}"

Optional: consider delegating fee/change selection to fundrawtransaction for robustness, then set nVersion/nType/vExtraPayload post-funding. I can sketch that if you want to go that route.


360-366: Ensure wallet remains loaded after restart (optional hardening)

On some setups, auto-loading the default wallet on restart can differ. A quick assert makes assumptions explicit.

         # Restart node 0 with blockfilterindex enabled
         self.restart_node(0, extra_args=["-blockfilterindex", "-peerblockfilters"])
 
         # The wallet from the first part of the test persists and has funds from blocks 0-999
         # No need to create a new wallet, it already exists with plenty of funds
+        # Ensure the default wallet is loaded after restart
+        assert "" in self.nodes[0].listwallets(), "Default wallet not loaded after restart"

366-369: Rename to reflect the AssetLock scenario and improve readability

This test builds and validates an AssetLockTx, not a ProRegTx. Renaming aligns the name with behavior and avoids confusion.

-        self.test_proregister_tx_filter()
+        self.test_assetlock_tx_single_output_filter()
         self.test_assetlock_tx_filter()
-    def test_proregister_tx_filter(self):
+    def test_assetlock_tx_single_output_filter(self):

Also applies to: 372-372


521-523: Avoid magic numbers for tx type

Using a named constant for the AssetLock tx type would improve readability. If the test framework doesn't expose one, consider adding a local constant (e.g., TX_TYPE_ASSET_LOCK = 8) near the imports.


📥 Commits

Reviewing files that changed from the base of the PR and between a6f55ee and 69a9818.

📒 Files selected for processing (6)
  • src/Makefile.am (2 hunks)
  • src/blockfilter.cpp (2 hunks)
  • src/evo/specialtx_filter.cpp (1 hunks)
  • src/evo/specialtx_filter.h (1 hunks)
  • src/test/blockfilter_tests.cpp (2 hunks)
  • test/functional/p2p_blockfilters.py (3 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
src/**/*.{cpp,h,cc,cxx,hpp}

📄 CodeRabbit Inference Engine (CLAUDE.md)

src/**/*.{cpp,h,cc,cxx,hpp}: Dash Core C++ codebase must be written in C++20 and require at least Clang 16 or GCC 11.1
Dash uses unordered_lru_cache for efficient caching with LRU eviction

Files:

  • src/test/blockfilter_tests.cpp
  • src/evo/specialtx_filter.h
  • src/evo/specialtx_filter.cpp
  • src/blockfilter.cpp
src/{test,wallet/test,qt/test}/**/*.{cpp,h,cc,cxx,hpp}

📄 CodeRabbit Inference Engine (CLAUDE.md)

Unit tests for C++ code should be placed in src/test/, src/wallet/test/, or src/qt/test/ and use Boost::Test or Qt 5 for GUI tests

Files:

  • src/test/blockfilter_tests.cpp
src/{masternode,evo}/**/*.{cpp,h,cc,cxx,hpp}

📄 CodeRabbit Inference Engine (CLAUDE.md)

Masternode lists must use immutable data structures (Immer library) for thread safety

Files:

  • src/evo/specialtx_filter.h
  • src/evo/specialtx_filter.cpp
test/functional/**/*.py

📄 CodeRabbit Inference Engine (CLAUDE.md)

Functional tests should be written in Python and placed in test/functional/

Files:

  • test/functional/p2p_blockfilters.py
🧠 Learnings (2)
📓 Common learnings
Learnt from: kwvg
PR: dashpay/dash#6543
File: src/wallet/receive.cpp:240-251
Timestamp: 2025-02-06T14:34:30.466Z
Learning: Pull request #6543 is focused on move-only changes and refactoring, specifically backporting from Bitcoin. Behavior changes should be proposed in separate PRs.
Learnt from: CR
PR: dashpay/dash#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-20T18:42:49.794Z
Learning: Applies to src/evo/specialtx.h : Special transactions use payload extensions defined in src/evo/specialtx.h
📚 Learning: 2025-07-20T18:42:49.794Z
Learnt from: CR
PR: dashpay/dash#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-20T18:42:49.794Z
Learning: Applies to src/evo/specialtx.h : Special transactions use payload extensions defined in src/evo/specialtx.h

Applied to files:

  • src/evo/specialtx_filter.h
  • src/evo/specialtx_filter.cpp
  • src/Makefile.am
  • src/blockfilter.cpp
🧬 Code Graph Analysis (5)
src/test/blockfilter_tests.cpp (2)
src/evo/assetlocktx.h (1)
  • CAssetLockPayload (27-66)
src/index/blockfilterindex.cpp (1)
  • filter (244-244)
src/evo/specialtx_filter.h (1)
src/evo/specialtx_filter.cpp (2)
  • ExtractSpecialTxFilterElements (52-134)
  • ExtractSpecialTxFilterElements (52-53)
src/evo/specialtx_filter.cpp (1)
src/test/blockfilter_tests.cpp (1)
  • stream (120-120)
src/blockfilter.cpp (1)
src/evo/specialtx_filter.cpp (2)
  • ExtractSpecialTxFilterElements (52-134)
  • ExtractSpecialTxFilterElements (52-53)
test/functional/p2p_blockfilters.py (2)
test/functional/test_framework/messages.py (7)
  • CAssetLockTx (1197-1222)
  • hash256 (89-90)
  • msg_getcfcheckpt (2596-2616)
  • msg_getcfheaders (2543-2566)
  • msg_getcfilters (2493-2516)
  • ser_uint256 (132-133)
  • tx_from_hex (230-232)
test/functional/test_framework/script_util.py (1)
  • key_to_p2pkh_script (65-67)
🪛 Ruff (0.12.2)
test/functional/p2p_blockfilters.py

528-528: Local variable txid is assigned to but never used

Remove assignment to unused variable txid

(F841)

🪛 Flake8 (7.2.0)
test/functional/p2p_blockfilters.py

[error] 528-528: local variable 'txid' is assigned to but never used

(F841)

🔇 Additional comments (10)
src/blockfilter.cpp (1)

200-203: Nice, the delegation keeps blockfilter decoupled from Evo internals

Good placement and minimal integration surface. The lambda appends elements without disrupting existing behavior, and set insertion handles deduplication.

src/Makefile.am (2)

203-204: Build system integration looks correct

Adding evo/specialtx_filter.h to BITCOIN_CORE_H and evo/specialtx_filter.cpp to libbitcoin_node_a_SOURCES is sufficient for consumers (blockfilter.cpp) and keeps the dependency only in the node lib.


483-484: No unintended link-time duplication detected

Verified that ExtractSpecialTxFilterElements only appears as:

  • Declaration in src/evo/specialtx_filter.h (line 21)
  • Definition in src/evo/specialtx_filter.cpp (line 52)
  • Invocation in src/blockfilter.cpp (line 201)

No other definitions found—ODR-safe.

src/test/blockfilter_tests.cpp (2)

201-355: Solid coverage of all special tx types; tests validate exact byte materialization

The assertions check precisely the serialized outpoint, key IDs, scripts, and hashes. This gives good confidence that compact filters achieve parity with bloom filter semantics.


357-444: Good edge-case coverage (empty scripts, OP_RETURN exclusion, empty operator payout)

The scenarios match the intended exclusion rules and ensure stability when fields are empty or unspendable.

src/evo/specialtx_filter.cpp (3)

35-42: Skip OP_RETURN only for AssetLock outputs; allow empty checks elsewhere

Helper excludes only empty scripts, which aligns with bloom-filter behavior for ProReg/Up* payload scripts (these shouldn’t be OP_RETURN anyway). Looks correct.


52-134: Behavior matches CBloomFilter::CheckSpecialTransactionMatchesAndUpdate

  • ProReg: collateral outpoint (serialized), owner/voting key IDs, payout script
  • ProUpServ: proTxHash, operator payout script
  • ProUpReg: proTxHash, voting key, payout script
  • ProUpRev: proTxHash
  • AssetLock: credit outputs excluding OP_RETURN

All wrapped with GetTxPayload checks and guarded by HasExtraPayloadField().


110-121: AssetLock credit outputs: correct OP_RETURN exclusion mirrors base filter

The explicit OP_RETURN check is needed here because these outputs are not in vout and would otherwise slip through unfiltered. This matches BasicFilterElements semantics for standard outputs.
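
For context, a sketch of the shape of that branch, assuming the usual Dash payload accessors (GetTxPayload, getCreditOutputs); the exact code under review may differ:

// AssetLock: credit outputs live in the extra payload, not in tx.vout, so the
// generic vout handling never sees them; apply the OP_RETURN/empty check here.
if (const auto opt_payload = GetTxPayload<CAssetLockPayload>(tx)) {
    for (const CTxOut& credit_out : opt_payload->getCreditOutputs()) {
        const CScript& script = credit_out.scriptPubKey;
        if (script.empty() || script[0] == OP_RETURN) continue; // mirror BasicFilterElements
        addElement(std::vector<unsigned char>(script.begin(), script.end()));
    }
}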

test/functional/p2p_blockfilters.py (2)

12-17: Imports for special-tx construction and signing look correct

The added imports align with the new AssetLockTx construction, signing, and script building used later in the tests. Good coverage without obvious redundancy.

Also applies to: 25-25, 29-35, 40-41


74-83: Good: wallet initialization and mining to a known address

Ensuring a wallet exists and mining to an address you control guarantees mature UTXOs for later funding. This keeps the tests deterministic.

…57/158)

Implements extraction of special transaction fields for compact block filters,
achieving feature parity with bloom filters. SPV clients can now detect special
transactions using either filtering mechanism.

Changes:
- Add ExtractSpecialTxFilterElements() to extract fields from special txs
- Integrate special tx extraction into BasicFilterElements()
- Support all special transaction types:
  * ProRegTx: collateral, owner/voting keys, payout script
  * ProUpServTx: ProTx hash, operator payout
  * ProUpRegTx: ProTx hash, voting key, payout script
  * ProUpRevTx: ProTx hash
  * AssetLockTx: credit output scripts
- Add comprehensive unit tests for all transaction types
- Add functional tests with actual AssetLockTx transactions
- Use delegation pattern to avoid circular dependencies

The implementation mirrors CheckSpecialTransactionMatchesAndUpdate() to ensure
consistent behavior between bloom and compact filters.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <[email protected]>
PastaPastaPasta force-pushed the feat/special-tx-compact-filters branch from 69a9818 to 8ae2280 on August 20, 2025 02:38
coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (2)
test/functional/p2p_blockfilters.py (2)

438-440: Fail fast when tx isn’t found in the mined block

Log-only can hide real regressions. Assert to make failures visible.

Apply this diff:

-        if not found_our_tx:
-            self.log.info(f"ERROR: Could not find our transaction {txid} in block!")
+        assert found_our_tx, f"Could not find AssetLockTx {txid} in block {block_hash}"

525-529: Fix lint (F841), simplify send path, and use txid for an inclusion check

Avoid reconstructing CTransaction just to re-serialize, and ensure txid is used. Also add an assertion that the tx made it into the mined block.

Apply this diff:

-        # Sign and send the transaction
-        signed_tx = node.signrawtransactionwithwallet(lock_tx.serialize().hex())
-        final_tx = tx_from_hex(signed_tx["hex"])
-        txid = node.sendrawtransaction(final_tx.serialize().hex())
+        # Sign and send the transaction
+        signed_hex = node.signrawtransactionwithwallet(lock_tx.serialize().hex())["hex"]
+        txid = node.sendrawtransaction(signed_hex)

Add this assertion right after mining the block (just below where block_hash is assigned):

# Verify our tx is included in the mined block
block = node.getblock(block_hash, 2)
assert any(t['txid'] == txid for t in block['tx']), f"AssetLockTx {txid} not found in block {block_hash}"
🧹 Nitpick comments (3)
test/functional/p2p_blockfilters.py (3)

308-356: AssetLockTx construction: add guardrails and verify OP_RETURN policy

The construction looks sound and preserves vExtraPayload across signing. Two suggestions:

  • Assert you actually have UTXOs before indexing listunspent()[0] to produce clearer failures.
  • Non-zero-value OP_RETURN output: usually allowed by consensus; policy can vary. Confirm regtest policy accepts non-zero OP_RETURN amounts to avoid intermittent mempool rejection.

Example minimal guard:

-        unspent = node.listunspent()[0]
+        utxos = node.listunspent()
+        assert utxos, "No spendable UTXOs available; ensure mining succeeded and maturity is reached"
+        unspent = utxos[0]

372-374: Rename for clarity: this test exercises AssetLockTx, not ProRegTx

Function name test_proregister_tx_filter is misleading given the body intentionally uses AssetLockTx as a proxy. Consider renaming to reflect its purpose (e.g., test_assetlock_tx_simple_filter).


541-547: Reasonable sanity checks on filter presence

Asserting non-empty filter is a pragmatic functional check given there’s no Python-side GCS decoder here. Optionally, you could assert expected growth versus a baseline non-special block, but that’s not strictly necessary.


📥 Commits

Reviewing files that changed from the base of the PR and between 69a9818 and 8ae2280.

📒 Files selected for processing (6)
  • src/Makefile.am (2 hunks)
  • src/blockfilter.cpp (2 hunks)
  • src/evo/specialtx_filter.cpp (1 hunks)
  • src/evo/specialtx_filter.h (1 hunks)
  • src/test/blockfilter_tests.cpp (2 hunks)
  • test/functional/p2p_blockfilters.py (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
  • src/evo/specialtx_filter.h
  • src/Makefile.am
  • src/evo/specialtx_filter.cpp
  • src/test/blockfilter_tests.cpp
🧰 Additional context used
📓 Path-based instructions (2)
src/**/*.{cpp,h,cc,cxx,hpp}

📄 CodeRabbit Inference Engine (CLAUDE.md)

src/**/*.{cpp,h,cc,cxx,hpp}: Dash Core C++ codebase must be written in C++20 and require at least Clang 16 or GCC 11.1
Dash uses unordered_lru_cache for efficient caching with LRU eviction

Files:

  • src/blockfilter.cpp
test/functional/**/*.py

📄 CodeRabbit Inference Engine (CLAUDE.md)

Functional tests should be written in Python and placed in test/functional/

Files:

  • test/functional/p2p_blockfilters.py
🧠 Learnings (2)
📓 Common learnings
Learnt from: kwvg
PR: dashpay/dash#6543
File: src/wallet/receive.cpp:240-251
Timestamp: 2025-02-06T14:34:30.466Z
Learning: Pull request #6543 is focused on move-only changes and refactoring, specifically backporting from Bitcoin. Behavior changes should be proposed in separate PRs.
Learnt from: CR
PR: dashpay/dash#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-20T18:42:49.794Z
Learning: Applies to src/evo/specialtx.h : Special transactions use payload extensions defined in src/evo/specialtx.h
📚 Learning: 2025-07-20T18:42:49.794Z
Learnt from: CR
PR: dashpay/dash#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-20T18:42:49.794Z
Learning: Applies to src/evo/specialtx.h : Special transactions use payload extensions defined in src/evo/specialtx.h

Applied to files:

  • src/blockfilter.cpp
🪛 Ruff (0.12.2)
test/functional/p2p_blockfilters.py

528-528: Local variable txid is assigned to but never used

Remove assignment to unused variable txid

(F841)

🪛 Flake8 (7.2.0)
test/functional/p2p_blockfilters.py

[error] 528-528: local variable 'txid' is assigned to but never used

(F841)

🔇 Additional comments (5)
src/blockfilter.cpp (2)

10-10: Good include: extracts Evo-specific logic behind a dedicated header

Including evo/specialtx_filter.h here keeps blockfilter.cpp free of Evo details while delegating extraction via a stable API. No layering red flags.


199-203: Correct integration point; verify extractor skips empty/OP_RETURN elements and keeps parity with bloom-filter logic

Hooking ExtractSpecialTxFilterElements right after standard txout script collection is the right place. The lambda correctly appends byte vectors to the ElementSet.

Please double-check:

  • The extractor omits empty elements and OP_RETURN scripts (for AssetLock credit outputs) to match the txout handling above and the bloom-filter behavior.
  • Only the intended special fields are serialized (no accidental inclusion of the whole vExtraPayload).
  • Duplicates are fine (std::set), but ensure canonical encodings to avoid semantically identical duplicates.

If all holds, this change should be non-breaking and deterministic.

test/functional/p2p_blockfilters.py (3)

12-18: Imports look appropriate and minimal

New imports are used and scoped to the new tests (CAssetLockTx, tx_from_hex, script helpers, ECKey, Decimal). No concerns.

Also applies to: 25-25, 29-35, 40-41


74-83: Wallet bootstrap + directed mining is fine

Creating a wallet (if missing) and mining to a fresh address ensures spendable UTXOs later in the run. This avoids brittle assumptions about default wallets.


360-362: Safe restart scope

Restarting node 0 here is OK since subsequent checks don’t rely on earlier P2P interfaces. Just be aware any prior P2PInterface connections to node 0 are invalidated after this restart.

coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
test/lint/lint-circular-dependencies.py (1)

25-26: Bless and annotate new circular dependencies in test/lint/lint-circular-dependencies.py

These two blockfilter cycles are expected to unblock CI, but please confirm they’re unavoidable and plan a follow-up to break them.

• Confirm whether pulling in llmq/signing (and thus net_processing) via evo/specialtx_filter is truly required for parsing special-tx payloads. You may be able to:
– Parse extensions using only evo/specialtx.h and core serialization types
– Split out minimal assetlock payload structs into a lightweight header (e.g., evo/assetlocktx_types.h) without llmq/signing includes
• If net_processing’s dependency on blockfilter types is the closing edge, consider narrowing or relocating the wire-protocol/blockfilter headers so net_processing doesn’t include blockfilter.cpp.

Annotate inline why these entries are temporary and point to PR #6825:

--- a/test/lint/lint-circular-dependencies.py
+++ b/test/lint/lint-circular-dependencies.py
@@ -25,2 +25,2 @@ EXPECTED_CIRCULAR_DEPENDENCIES = (
-    "blockfilter -> evo/specialtx_filter -> evo/assetlocktx -> llmq/signing -> net_processing -> blockfilter",
-    "blockfilter -> evo/specialtx_filter -> evo/assetlocktx -> llmq/signing -> net_processing -> index/blockfilterindex -> blockfilter",
+    "blockfilter -> evo/specialtx_filter -> evo/assetlocktx -> llmq/signing -> net_processing -> blockfilter",  # PR #6825: special-tx elements in compact filters
+    "blockfilter -> evo/specialtx_filter -> evo/assetlocktx -> llmq/signing -> net_processing -> index/blockfilterindex -> blockfilter",  # PR #6825

📥 Commits

Reviewing files that changed from the base of the PR and between 8ae2280 and 773fe9b.

📒 Files selected for processing (1)
  • test/lint/lint-circular-dependencies.py (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📓 Common learnings
Learnt from: kwvg
PR: dashpay/dash#6543
File: src/wallet/receive.cpp:240-251
Timestamp: 2025-02-06T14:34:30.466Z
Learning: Pull request #6543 is focused on move-only changes and refactoring, specifically backporting from Bitcoin. Behavior changes should be proposed in separate PRs.
Learnt from: CR
PR: dashpay/dash#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-20T18:42:49.794Z
Learning: Applies to src/evo/specialtx.h : Special transactions use payload extensions defined in src/evo/specialtx.h

PastaPastaPasta and others added 2 commits August 20, 2025 08:31
- Rename misleading test_proregister_tx_filter to test_assetlock_basic
- Remove excessive debug logging throughout
- Add proper filter verification by comparing filter sizes
- Document why only AssetLockTx is tested functionally
- Use control blocks to verify special tx fields increase filter size

The improved tests now properly verify that special transaction fields
are included in filters by comparing sizes rather than relying on logs.
coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
test/functional/p2p_blockfilters.py (1)

491-506: Fix F841 lint error and assert inclusion of the tx in the mined block

The current code assigns txid but never uses it (Ruff/Flake8 F841). Also, fail fast if the tx is missing from the mined block to avoid false positives. This also simplifies the sign/send flow.

-        # Sign and send the transaction
-        signed_tx = node.signrawtransactionwithwallet(lock_tx.serialize().hex())
-        final_tx = tx_from_hex(signed_tx["hex"])
-        txid = node.sendrawtransaction(final_tx.serialize().hex())
+        # Sign and send the transaction
+        signed_hex = node.signrawtransactionwithwallet(lock_tx.serialize().hex())["hex"]
+        txid = node.sendrawtransaction(signed_hex)
@@
-        # Mine it into a block
+        # Mine it into a block
         block_hash = self.generate(node, 1, sync_fun=self.no_op)[0]
 
-        # Get the filter to verify it was created
+        # Verify our tx is included and has the expected special payload
+        block = node.getblock(block_hash, 2)
+        tx = next((t for t in block["tx"] if t["txid"] == txid), None)
+        assert tx is not None, f"AssetLockTx {txid} not found in block {block_hash}"
+        assert "assetLockTx" in tx, "Mined transaction should have assetLockTx field"
+        assert "creditOutputs" in tx["assetLockTx"], "AssetLockTx should have creditOutputs"
+
+        # Get the filter to verify it was created
         filter_result = node.getblockfilter(block_hash, 'basic')
         assert 'filter' in filter_result, "Block should have a filter"
🧹 Nitpick comments (3)
test/functional/p2p_blockfilters.py (3)

320-349: Rename lockTx_payload to snake_case for consistency

Follow Python naming conventions and keep variable naming consistent across both builders.

-        lockTx_payload = CAssetLockTx(1, credit_outputs)
+        lock_tx_payload = CAssetLockTx(1, credit_outputs)
@@
-        lock_tx.vExtraPayload = lockTx_payload.serialize()
+        lock_tx.vExtraPayload = lock_tx_payload.serialize()
-        lockTx_payload = CAssetLockTx(1, credit_outputs)
+        lock_tx_payload = CAssetLockTx(1, credit_outputs)
@@
-        lock_tx.vExtraPayload = lockTx_payload.serialize()
+        lock_tx.vExtraPayload = lock_tx_payload.serialize()

Also applies to: 467-490


389-411: Strengthen the assertion by checking at least one credit output is present

You already assert presence of assetLockTx and creditOutputs. Consider adding a quick count check to ensure non-empty creditOutputs; keeps the test from passing on empty arrays.

         assert 'assetLockTx' in found_tx, "Mined transaction should have assetLockTx field"
         assert 'creditOutputs' in found_tx['assetLockTx'], "AssetLockTx should have creditOutputs"
+        assert len(found_tx['assetLockTx']['creditOutputs']) >= 1, "Expected at least one credit output"

425-427: Optional: trim verbose logs to keep tests quieter

Functional tests generally avoid info logs unless diagnosing flakiness. Consider removing or downgrading these to debug if not needed.

Also applies to: 511-513


📥 Commits

Reviewing files that changed from the base of the PR and between 0ea85f7 and 3fb9183.

📒 Files selected for processing (3)
  • src/evo/specialtx_filter.cpp (1 hunks)
  • src/evo/specialtx_filter.h (1 hunks)
  • test/functional/p2p_blockfilters.py (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • src/evo/specialtx_filter.h
  • src/evo/specialtx_filter.cpp
🧰 Additional context used
📓 Path-based instructions (1)
test/functional/**/*.py

📄 CodeRabbit Inference Engine (CLAUDE.md)

Functional tests should be written in Python and placed in test/functional/

Files:

  • test/functional/p2p_blockfilters.py
🧠 Learnings (1)
📓 Common learnings
Learnt from: kwvg
PR: dashpay/dash#6543
File: src/wallet/receive.cpp:240-251
Timestamp: 2025-02-06T14:34:30.466Z
Learning: Pull request #6543 is focused on move-only changes and refactoring, specifically backporting from Bitcoin. Behavior changes should be proposed in separate PRs.
Learnt from: CR
PR: dashpay/dash#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-20T18:42:49.794Z
Learning: Applies to src/evo/specialtx.h : Special transactions use payload extensions defined in src/evo/specialtx.h
🧬 Code Graph Analysis (1)
test/functional/p2p_blockfilters.py (2)
test/functional/test_framework/messages.py (2)
  • CAssetLockTx (1197-1222)
  • tx_from_hex (230-232)
test/functional/test_framework/script_util.py (1)
  • key_to_p2pkh_script (65-67)
🪛 Ruff (0.12.2)
test/functional/p2p_blockfilters.py

494-494: Local variable txid is assigned to but never used

Remove assignment to unused variable txid

(F841)

🪛 Flake8 (7.2.0)
test/functional/p2p_blockfilters.py

[error] 494-494: local variable 'txid' is assigned to but never used

(F841)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (10)
  • GitHub Check: mac-build / Build source
  • GitHub Check: linux64_nowallet-build / Build source
  • GitHub Check: linux64_tsan-build / Build source
  • GitHub Check: win64-build / Build source
  • GitHub Check: arm-linux-build / Build source
  • GitHub Check: linux64_ubsan-build / Build source
  • GitHub Check: linux64-build / Build source
  • GitHub Check: linux64_sqlite-build / Build source
  • GitHub Check: linux64_fuzz-build / Build source
  • GitHub Check: Lint / Run linters
🔇 Additional comments (3)
test/functional/p2p_blockfilters.py (3)

12-17: Imports are accurate and minimal

The new imports match usage in the added tests and keep the module surface tight.

Also applies to: 25-25, 29-35, 40-41


74-83: Good: ensure spendable coinbase by creating a wallet and mining to an address

This avoids “no wallet” pitfalls and guarantees spendable UTXOs for the AssetLockTx tests.


355-378: Scope rationale is sound; leverages unit tests for other special tx types

Confining functional coverage to AssetLockTx is pragmatic here; unit tests validating ProReg/ProUp* coverage complement this well.

Functional test updates in p2p_blockfilters.py:

- Simplify sign/send by using signed hex directly; remove unused variables

- Fail fast: assert AssetLockTx is included in mined block and has creditOutputs

- Strengthen assertion: require at least one credit output

- Rename lockTx_payload -> lock_tx_payload for snake_case consistency

- Downgrade verbose info logs to debug to keep tests quieter
UdjinM6 left a comment

LGTM, one nit + could probably make clang-format happy

coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (2)
test/functional/p2p_blockfilters.py (2)

355-378: Clear rationale for limiting functional coverage to AssetLockTx

The docstring explains why only AssetLockTx is tested functionally and points to comprehensive unit tests for other types. This addresses the earlier “what about other types?” feedback.


496-504: Good: assert inclusion and payload presence in the mined block

This addresses prior feedback to fail fast if the tx is missing and to actually use txid. The assertions are clear and will surface regressions.

🧹 Nitpick comments (4)
test/functional/p2p_blockfilters.py (4)

308-354: Consider zero-valued OP_RETURN and more robust coin selection (optional)

Two optional improvements:

  • Many policies expect OP_RETURN outputs to carry zero value. If Dash policy enforces this, switch to a zero-valued OP_RETURN and adjust the “remaining” calculation accordingly.
  • Using listunspent()[0] can pick an unexpectedly small UTXO; consider relying on the wallet to select inputs (fundrawtransaction) when feasible.

If policy requires OP_RETURN to be zero-valued, apply this refactor:

-        # Calculate remaining amount (minus small fee)
-        fee = Decimal("0.0001")
-        remaining = Decimal(str(unspent['amount'])) - amount_locked - fee
+        # Calculate remaining amount (minus small fee)
+        fee = Decimal("0.0001")
+        # If OP_RETURN must be zero-valued, don't subtract amount_locked here
+        remaining = Decimal(str(unspent['amount'])) - fee
@@
-        # Add OP_RETURN output with the locked amount
-        lock_tx.vout.append(CTxOut(int(amount_locked * COIN), CScript([OP_RETURN, b""])))
+        # Add OP_RETURN output (zero-valued if required by policy)
+        lock_tx.vout.append(CTxOut(0, CScript([OP_RETURN, b""])))

Caveat: If non-zero OP_RETURN is intentional/required for AssetLock semantics in Dash, keep the current behavior.


389-412: Strengthen validation: assert that creditOutputs script matches the constructed P2PKH

You already assert presence of assetLockTx and creditOutputs. Assert that the credit output script actually matches the pubkey-derived P2PKH to better tie the broadcast object to the mined payload.

         tx_details = node.getrawtransaction(txid, True)
         assert tx_details.get('type') == 8, f"Expected type 8 (AssetLockTx), got {tx_details.get('type')}"
         # Note: vExtraPayload may not appear in RPC output but the transaction is still valid
@@
         assert found_tx is not None, f"Transaction {txid} not found in block"
         assert 'assetLockTx' in found_tx, "Mined transaction should have assetLockTx field"
         assert 'creditOutputs' in found_tx['assetLockTx'], "AssetLockTx should have creditOutputs"
         assert len(found_tx['assetLockTx']['creditOutputs']) >= 1, "Expected at least one credit output"
+        # Verify that the expected P2PKH script is present among creditOutputs
+        expected_script_hex = key_to_p2pkh_script(pubkey).hex()
+        credit_scripts = [co.get('script') for co in found_tx['assetLockTx']['creditOutputs']]
+        assert expected_script_hex in credit_scripts, "Expected credit output script not found in assetLockTx payload"

Note: If the RPC field is named differently (e.g., 'scriptPubKey' vs 'script'), adapt accordingly.


423-431: Size-based filter assertion can be fragile; consider fallback strategy

Comparing hex length is a reasonable heuristic. If this ever flakes in CI, consider a fallback: mine an additional “control” block that includes only an OP_RETURN tx (no special payload) to control for OP_RETURN’s contribution, or add a targeted unit/integration helper to decode and check filter membership of the specific creditOutputs scripts.


435-525: Add script-level assertions for multiple credit outputs (and optional zero-valued OP_RETURN)

  • Mirror the single-output test by asserting both expected scripts appear in the mined AssetLockTx payload.
  • If OP_RETURN should be zero-valued per policy, adjust here as well.

Script presence checks:

         tx = next((t for t in block["tx"] if t["txid"] == txid), None)
         assert tx is not None, f"AssetLockTx {txid} not found in block {block_hash}"
         assert "assetLockTx" in tx, "Mined transaction should have assetLockTx field"
         assert "creditOutputs" in tx["assetLockTx"], "AssetLockTx should have creditOutputs"
+        expected1 = key_to_p2pkh_script(pubkey1).hex()
+        expected2 = key_to_p2pkh_script(pubkey2).hex()
+        scripts = [co.get('script') for co in tx["assetLockTx"]["creditOutputs"]]
+        assert expected1 in scripts and expected2 in scripts, "Not all expected credit output scripts found"

Optional OP_RETURN value change (if required by policy):

-        # Add OP_RETURN output
-        lock_tx.vout.append(CTxOut(int(total_locked * COIN), CScript([OP_RETURN, b""])))
+        # Add OP_RETURN output (zero-valued if required by policy)
+        lock_tx.vout.append(CTxOut(0, CScript([OP_RETURN, b""])))

If non-zero OP_RETURN is intentional for AssetLock semantics, keep as-is.


📥 Commits

Reviewing files that changed from the base of the PR and between 3fb9183 and 7a33c14.

📒 Files selected for processing (1)
  • test/functional/p2p_blockfilters.py (3 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
test/functional/**/*.py

📄 CodeRabbit Inference Engine (CLAUDE.md)

Functional tests should be written in Python and placed in test/functional/

Files:

  • test/functional/p2p_blockfilters.py
🧠 Learnings (1)
📓 Common learnings
Learnt from: kwvg
PR: dashpay/dash#6543
File: src/wallet/receive.cpp:240-251
Timestamp: 2025-02-06T14:34:30.466Z
Learning: Pull request #6543 is focused on move-only changes and refactoring, specifically backporting from Bitcoin. Behavior changes should be proposed in separate PRs.
Learnt from: CR
PR: dashpay/dash#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-20T18:42:49.794Z
Learning: Applies to src/evo/specialtx.h : Special transactions use payload extensions defined in src/evo/specialtx.h
🧬 Code Graph Analysis (1)
test/functional/p2p_blockfilters.py (2)
test/functional/test_framework/messages.py (4)
  • CAssetLockTx (1197-1222)
  • ser_uint256 (132-133)
  • tx_from_hex (230-232)
  • uint256_from_str (136-137)
test/functional/test_framework/script_util.py (1)
  • key_to_p2pkh_script (65-67)
🔇 Additional comments (5)
test/functional/p2p_blockfilters.py (5)

12-17: Imports for AssetLockTx support look appropriate

The additional message types and helpers are necessary for constructing and inspecting AssetLockTx objects and signed transactions. No apparent redundancies here.

Also applies to: 25-25


29-35: Script imports are correct for OP_RETURN and P2PKH construction

Using CScript and OP_RETURN is appropriate for the AssetLockTx test scaffolding; key_to_p2pkh_script keeps the script building simple.


40-41: Key generation and decimal arithmetic imports are appropriate

ECKey and Decimal usage is sensible here to control precision and construct expected P2PKH scripts.


74-83: Wallet bootstrap and coin funding: LGTM

Creating a wallet, mining to a known address, and generating 999 blocks ensures mature UTXOs are available for later tests. Good setup.


306-307: Good separation: run special-tx filter tests after base protocol checks

Deferring to a dedicated suite for special transactions keeps concerns isolated and avoids interfering with the earlier P2P filter protocol tests.

UdjinM6 commented Aug 20, 2025

needs

    def skip_test_if_missing_module(self):
        self.skip_if_no_wallet()

to fix https://github.com/dashpay/dash/actions/runs/17100397015/job/48495918707?pr=6825

(or switch to MiniWallet)

knst (Collaborator) left a comment

I am still reviewing, in progress...

"wallet/wallet -> wallet/walletdb -> wallet/wallet",
"node/coinstats -> validation -> node/coinstats",
# Dash
"blockfilter -> evo/specialtx_filter -> evo/assetlocktx -> llmq/signing -> net_processing -> blockfilter",
Collaborator

see c8ba1ae

This same forward declaration is used in llmq/quorum; maybe it's worth creating a new header for it: llmq/hash.h or llmq/io.h or something similar.

Or rename llmq/utils.h to llmq/quorum_utils.h and put BuildSignHash in llmq/util.h
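
For illustration, one possible shape of such a lightweight header; the file name and includes are only suggestions, and the BuildSignHash signature mirrors the existing forward declaration:

// Hypothetical llmq/hash.h -- lets evo/assetlocktx.cpp and llmq/quorum share the
// declaration without pulling in llmq/signing (and, transitively, net_processing).
#ifndef BITCOIN_LLMQ_HASH_H
#define BITCOIN_LLMQ_HASH_H

#include <llmq/params.h> // Consensus::LLMQType (adjust the include if it lives elsewhere)
#include <uint256.h>

namespace llmq {
uint256 BuildSignHash(Consensus::LLMQType llmqType, const uint256& quorumHash,
                      const uint256& id, const uint256& msgHash);
} // namespace llmq

#endif // BITCOIN_LLMQ_HASH_H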

PastaPastaPasta and others added 3 commits August 20, 2025 10:11
…r extraction in sync

Add maintenance notes in ExtractSpecialTxFilterElements and CBloomFilter::CheckSpecialTransactionMatchesAndUpdate to ensure future changes update both implementations together.
Add skip_test_if_missing_module to call skip_if_no_wallet() to avoid failures when built without wallet.

knst (Collaborator) commented Aug 20, 2025

    def skip_test_if_missing_module(self):
        self.skip_if_no_wallet()

I think there's a way to make it work without skip_if_no_wallet and without MiniWallet; the only blocker is listunspent and I think I can avoid it. I will prepare a fix soon.

coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (10)
src/evo/assetlocktx.cpp (4)

24-29: Forward-declaring BuildSignHash here is a stopgap; consider a small llmq header to own this API.

This local forward declaration avoids a heavy include, but it’s brittle: any signature drift in llmq::BuildSignHash won’t be caught by the header system. Prefer introducing a lightweight llmq header (e.g., llmq/hash.h) that declares BuildSignHash, and include it here and at the definition site.

Apply this diff locally to reduce dynamic init in this TU while keeping behavior unchanged:

-namespace llmq
-{
-// forward declaration to avoid circular dependency
-uint256 BuildSignHash(Consensus::LLMQType llmqType, const uint256& quorumHash, const uint256& id, const uint256& msgHash);
-} // llmq
+namespace llmq {
+// Forward declaration to avoid circular dependency. Prefer a dedicated header for this API.
+uint256 BuildSignHash(Consensus::LLMQType llmqType, const uint256& quorumHash, const uint256& id, const uint256& msgHash);
+} // namespace llmq

116-118: Avoid dynamic initialization for a TU-local constant.

Use a constexpr string literal (or std::string_view) instead of constructing a std::string at runtime.

-const std::string ASSETUNLOCK_REQUESTID_PREFIX = "plwdtx";
+static constexpr char ASSETUNLOCK_REQUESTID_PREFIX[] = "plwdtx";

191-193: Locking style in if-init is non-obvious with LOCK macro; prefer WITH_LOCK or explicit lock before the condition.

The if (LOCK(::cs_main); ...) pattern relies on macro expansion details and hurts readability. Make the lock scope explicit or use WITH_LOCK.

-    if (LOCK(::cs_main); blockman.LookupBlockIndex(assetUnlockTx.getQuorumHash()) == nullptr) {
+    LOCK(::cs_main);
+    if (blockman.LookupBlockIndex(assetUnlockTx.getQuorumHash()) == nullptr) {
         return state.Invalid(TxValidationResult::TX_CONSENSUS, "bad-assetunlock-quorum-hash");
     }

Alternatively:

if (WITH_LOCK(::cs_main, return blockman.LookupBlockIndex(assetUnlockTx.getQuorumHash()) == nullptr;)) {
    return state.Invalid(...);
}

138-142: Height-window check: clarify boundary conditions in log and error code names.

The check mixes < requestedHeight and >= getHeightToExpiry(). Consider making the error strings precise (“too-early” vs “too-late”) and include both heights in logs to aid triage.

-        LogPrint(BCLog::CREDITPOOL, "Asset unlock tx %d with requested height %d could not be accepted on height: %d\n",
-                index, requestedHeight, pindexTip->nHeight);
-        return state.Invalid(TxValidationResult::TX_CONSENSUS, "bad-assetunlock-too-late");
+        LogPrint(BCLog::CREDITPOOL,
+                 "Asset unlock tx idx=%d requestedHeight=%d tipHeight=%d expiryHeight=%d rejected (early=%d, late=%d)\n",
+                 index, requestedHeight, pindexTip->nHeight, getHeightToExpiry(),
+                 int(pindexTip->nHeight < requestedHeight),
+                 int(pindexTip->nHeight >= getHeightToExpiry()));
+        return state.Invalid(TxValidationResult::TX_CONSENSUS,
+                             pindexTip->nHeight < requestedHeight ? "bad-assetunlock-too-early"
+                                                                  : "bad-assetunlock-too-late");
src/common/bloom.cpp (2)

122-127: Duplication risk: share a single extraction path with block filters.

You already note the maintenance hazard. To remove drift entirely, delegate to ExtractSpecialTxFilterElements(tx, addElement) and check elements against the bloom filter inline. This avoids two sources of truth for which fields are “filterable.”

Example approach within CBloomFilter:

-// ... comment ...
-bool CBloomFilter::CheckSpecialTransactionMatchesAndUpdate(const CTransaction &tx)
+// ... comment ...
+bool CBloomFilter::CheckSpecialTransactionMatchesAndUpdate(const CTransaction &tx)
 {
-    if (!tx.HasExtraPayloadField()) {
-        return false; // it is not a special transaction
-    }
-    switch(tx.nType) {
-    // per-type logic ...
-    }
-    LogPrintf("Unknown special transaction type in Bloom filter check.\n");
-    return false;
+    if (!tx.HasExtraPayloadField()) return false;
+
+    bool matched = false;
+    auto add_and_match = [&](Span<const unsigned char> elem, const std::optional<uint256>& to_update) {
+        if (contains(elem)) {
+            matched = true;
+            if ((nFlags & BLOOM_UPDATE_MASK) == BLOOM_UPDATE_ALL && to_update) {
+                insert(*to_update);
+            }
+        }
+    };
+    // Provide a callback that matches the extraction contract of specialtx_filter.cpp
+    ExtractSpecialTxFilterElements(tx, [&](Span<const unsigned char> elem, const std::optional<uint256>& update_with) {
+        add_and_match(elem, update_with);
+    });
+    return matched;
 }

This requires a small adapter callback signature in specialtx_filter.h to surface an optional “update” element where applicable (e.g., proTxHash).
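
For example, the extended contract could look roughly like this (illustrative only; an overload or a defaulted second argument would keep the existing blockfilter call sites unchanged):

// Sketch: the second callback argument carries the hash a BLOOM_UPDATE_ALL filter
// should insert when the element matches (e.g. the referenced proTxHash).
#include <functional>
#include <optional>
#include <span.h>
#include <uint256.h>

class CTransaction;

using SpecialTxFilterElementFn =
    std::function<void(Span<const unsigned char> element, const std::optional<uint256>& update_with)>;

void ExtractSpecialTxFilterElements(const CTransaction& tx, const SpecialTxFilterElementFn& addElement);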


200-202: Enhance unknown-type log with context.

Log the transaction type to aid debugging new types/features.

-    LogPrintf("Unknown special transaction type in Bloom filter check.\n");
+    LogPrintf("Unknown special transaction type in Bloom filter check (nType=%d).\n", int(tx.nType));
test/functional/p2p_blockfilters.py (3)

77-86: Proactively ensure spendable UTXO availability.

Creating the wallet and mining to a fresh address before the split is correct. One minor hardening: assert that listunspent is non-empty before spending later.

-        # Ensure node 0 has a wallet so block rewards are saved for later use
+        # Ensure node 0 has a wallet so block rewards are saved for later use
         if not self.nodes[0].listwallets():
             self.nodes[0].createwallet(wallet_name="")
@@
-        # Get an address to mine to so we receive the rewards
+        # Get an address to mine to so we receive the rewards
         mining_addr = self.nodes[0].getnewaddress()
@@
-        self.generatetoaddress(self.nodes[0], 999, mining_addr)
+        self.generatetoaddress(self.nodes[0], 999, mining_addr)
+        # Sanity: we should have UTXOs to spend in subsequent tests
+        assert self.nodes[0].listunspent(), "Expected spendable UTXOs after initial mining"

383-437: Block filter validation via size delta is pragmatic; add a direct inclusion assertion where feasible.

Comparing filter sizes is a reasonable proxy without a GCS decoder. Consider also asserting that the mined block contains the expected AssetLockTx structure (already done) and, optionally, logging the filter header to aid debugging if this ever flakes.

No code change required; optional logging:

self.log.debug(f"Filter header: {self.nodes[0].getblockfilter(block_hash, 'basic')['header']}")

358-381: Optional: avoid restart by running special-tx subtests in the existing configuration.

Node 0 was already started with -blockfilterindex and -peerblockfilters. Restarting mid-test drops P2P connections (not used later) and adds wall time. Consider removing the restart unless you need a clean state here.

test/lint/lint-circular-dependencies.py (1)

45-46: Core IO cycle entry changed; ensure it reflects the current graph.

The core_io -> evo/mnhftx -> llmq/signing -> ... path reintroduces a signing include for a different route. Validate this is intentional and necessary; otherwise consider similar decoupling (forward-declare or split header) as was done for assetlocktx.


📥 Commits

Reviewing files that changed from the base of the PR and between 7a33c14 and a1ecf5d.

📒 Files selected for processing (5)
  • src/common/bloom.cpp (1 hunks)
  • src/evo/assetlocktx.cpp (1 hunks)
  • src/evo/specialtx_filter.cpp (1 hunks)
  • test/functional/p2p_blockfilters.py (3 hunks)
  • test/lint/lint-circular-dependencies.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/evo/specialtx_filter.cpp
🧰 Additional context used
📓 Path-based instructions (3)
test/functional/**/*.py

📄 CodeRabbit Inference Engine (CLAUDE.md)

Functional tests should be written in Python and placed in test/functional/

Files:

  • test/functional/p2p_blockfilters.py
src/**/*.{cpp,h,cc,cxx,hpp}

📄 CodeRabbit Inference Engine (CLAUDE.md)

src/**/*.{cpp,h,cc,cxx,hpp}: Dash Core C++ codebase must be written in C++20 and require at least Clang 16 or GCC 11.1
Dash uses unordered_lru_cache for efficient caching with LRU eviction

Files:

  • src/evo/assetlocktx.cpp
  • src/common/bloom.cpp
src/{masternode,evo}/**/*.{cpp,h,cc,cxx,hpp}

📄 CodeRabbit Inference Engine (CLAUDE.md)

Masternode lists must use immutable data structures (Immer library) for thread safety

Files:

  • src/evo/assetlocktx.cpp
🧠 Learnings (2)
📓 Common learnings
Learnt from: kwvg
PR: dashpay/dash#6543
File: src/wallet/receive.cpp:240-251
Timestamp: 2025-02-06T14:34:30.466Z
Learning: Pull request #6543 is focused on move-only changes and refactoring, specifically backporting from Bitcoin. Behavior changes should be proposed in separate PRs.
Learnt from: CR
PR: dashpay/dash#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-20T18:42:49.794Z
Learning: Applies to src/evo/specialtx.h : Special transactions use payload extensions defined in src/evo/specialtx.h
📚 Learning: 2025-07-20T18:42:49.794Z
Learnt from: CR
PR: dashpay/dash#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-20T18:42:49.794Z
Learning: Applies to src/evo/specialtx.h : Special transactions use payload extensions defined in src/evo/specialtx.h

Applied to files:

  • src/evo/assetlocktx.cpp
  • src/common/bloom.cpp
🧬 Code Graph Analysis (1)
test/functional/p2p_blockfilters.py (3)
test/functional/test_framework/messages.py (2)
  • CAssetLockTx (1197-1222)
  • tx_from_hex (230-232)
test/functional/test_framework/script_util.py (1)
  • key_to_p2pkh_script (65-67)
test/functional/test_framework/test_framework.py (1)
  • skip_if_no_wallet (1060-1068)
🔇 Additional comments (9)
src/common/bloom.cpp (5)

133-145: ProRegTx: parity with existing bloom semantics looks good.

Collateral outpoint, owner/voting keyIDs, and payout script are checked; on match, insert tx hash when BLOOM_UPDATE_ALL is set. Matches legacy behavior.


147-158: ProUpServTx update target: confirm BLOOM_UPDATE_ALL inserts proTxHash (not txid) by design.

You insert proTxHash when matching scriptOperatorPayout. That mirrors legacy bloom logic, but it’s subtle. Please confirm unit tests cover this exact update behavior so clients relying on proTxHash discover follow-up updates consistently.

If missing, I can add a unit asserting the filter is updated with proTxHash when the payout script matches.
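
A rough sketch of what that unit could look like, e.g. in bloom_tests.cpp (a sketch only: the payload/type/helper names CProUpServTx, TRANSACTION_PROVIDER_UPDATE_SERVICE and SetTxPayload follow the types discussed in this PR, and exact signatures may differ in the tree):

BOOST_AUTO_TEST_CASE(bloom_proupservtx_updates_with_protxhash)
{
    CBloomFilter filter(10, 0.000001, 0, BLOOM_UPDATE_ALL);

    // Hypothetical payload setup; field names assumed from the discussion above.
    CProUpServTx payload;
    payload.proTxHash = GetRandHash();
    payload.scriptOperatorPayout = GetScriptForDestination(PKHash(CKeyID()));

    CMutableTransaction mtx;
    mtx.nVersion = 3;
    mtx.nType = TRANSACTION_PROVIDER_UPDATE_SERVICE;
    SetTxPayload(mtx, payload); // assumed helper from evo/specialtx.h

    // Seed the filter with the operator payout script so the special-tx path matches.
    filter.insert(std::vector<unsigned char>(payload.scriptOperatorPayout.begin(),
                                             payload.scriptOperatorPayout.end()));

    BOOST_CHECK(filter.IsRelevantAndUpdate(CTransaction(mtx)));
    // The subtle behavior under test: the filter should now contain proTxHash,
    // not just the txid, so follow-up updates referencing it remain discoverable.
    BOOST_CHECK(filter.contains(payload.proTxHash));
}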


159-171: ProUpRegTx: parity check.

Matching on voting key or payout updates the filter with the referenced proTxHash when BLOOM_UPDATE_ALL is set. This matches the bloom vs. compact filter alignment intent.


179-191: AssetLockTx: correct handling by matching credit outputs only; OP_RETURN remains excluded.

Iterating credit outputs and feeding each to ProcessTxOut matches the consensus rules (P2PKH-only, OP_RETURN excluded). Good alignment with CheckAssetLockTx validation.


227-241: Non-regression: integration order in IsRelevantAndUpdate is sensible.

Checking the tx hash first, then special tx matching, then standard outputs/inputs preserves existing behavior while augmenting for DIP2+ types.

test/functional/p2p_blockfilters.py (3)

69-71: Wallet gating added; resolves CI flakes when wallet is not built.

This matches reviewer guidance. Thanks for adding it.


311-357: AssetLockTx builder: correct payload shape and amounts; OP_RETURN encoding matches consensus checks.

  • P2PKH credit outputs via key_to_p2pkh_script.
  • Single OP_RETURN with empty data and value equal to the sum of credit outputs.
  • nType=8 and vExtraPayload set.

This mirrors CheckAssetLockTx rules and should pass validation on-chain.
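
For readers skimming the thread, a condensed sketch of that shape (hypothetical helper: the CAssetLockTx(version, credit_outputs) constructor and the nType/vExtraPayload fields are assumed from the framework references cited in this review; fee/change handling and signing are omitted):

from test_framework.messages import COIN, COutPoint, CTransaction, CTxIn, CTxOut, CAssetLockTx
from test_framework.script import CScript, OP_RETURN
from test_framework.script_util import key_to_p2pkh_script

def build_asset_lock_tx_hex(utxo, pubkey, amount):
    # One P2PKH credit output, mirrored by a single OP_RETURN output of equal value.
    credit_outputs = [CTxOut(int(amount * COIN), key_to_p2pkh_script(pubkey))]
    lock_tx = CTransaction()
    lock_tx.vin = [CTxIn(COutPoint(int(utxo["txid"], 16), utxo["vout"]))]
    lock_tx.vout = [CTxOut(int(amount * COIN), CScript([OP_RETURN, b""]))]
    lock_tx.nVersion = 3
    lock_tx.nType = 8  # asset lock
    lock_tx.vExtraPayload = CAssetLockTx(1, credit_outputs).serialize()
    return lock_tx.serialize().hex()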


438-528: Multiple credit outputs: good coverage and assertions.

Verifies both RPC decoding and that filters grow with more credit outputs. Nicely complements unit tests for other special tx types.

test/lint/lint-circular-dependencies.py (1)

36-40: Double-check circular dependencies after adding evo/assetlocktx path

The entries in test/lint/lint-circular-dependencies.py (lines 36–40) now reflect the new bloom→assetlocktx→llmq/commitment include path and the removal of llmq/signing. To avoid CI failures:

  • Ensure the Python multiprocess package is installed in your CI/dev environment (or update the script to import the standard multiprocessing module instead).
  • Re-run the check:
    contrib/devtools/circular-dependencies.py $(git ls-files '*.h' '*.cpp') \
      | sed 's/^Circular dependency: //' | sort
    
  • Update lines 36–40 in test/lint/lint-circular-dependencies.py to exactly match that sorted output.

@UdjinM6
Copy link

UdjinM6 commented Aug 20, 2025

Say a node is running with blockfilter enabled already. Do I have to drop <datadir>/indexes/blockfilter folder manually after this PR? We need release notes for this I guess...

Also, pls make clang format happy, there are just a few trivial fixes.

@PastaPastaPasta
Copy link
Member Author

Say a node is running with blockfilter enabled already. Do I have to drop <datadir>/indexes/blockfilter folder manually after this PR? We need release notes for this I guess

I don't know. @coderabbitai please review this concern.

Copy link

coderabbitai bot commented Aug 20, 2025

Tip

For best results, initiate chat on the files or code changes.

🧩 Analysis chain

Let me investigate the blockfilter index management to understand if manual deletion is required when filter logic changes.


🏁 Script executed:

#!/bin/bash
# Search for blockfilter index management and versioning
echo "=== Searching for blockfilter index management ==="
rg -n -C5 "blockfilter.*index" --type=cpp
echo ""
echo "=== Looking for blockfilter database versioning ==="
rg -n -C3 "(version|Version).*blockfilter|blockfilter.*(version|Version)" --type=cpp
echo ""
echo "=== Checking for filter invalidation or rebuild logic ==="
rg -n -C3 "(invalidat|rebuild|reindex).*filter|filter.*(invalidat|rebuild|reindex)" --type=cpp

Length of output: 17279


🏁 Script executed:

#!/bin/bash
# Look for blockfilter index database schema/versioning
echo "=== Searching for blockfilter database schema and versioning ==="
rg -n -C5 "DB_.*VERSION|VERSION.*DB|schema|Schema" src/index/blockfilterindex.cpp src/index/blockfilterindex.h
echo ""
echo "=== Looking for BaseIndex versioning (parent class) ==="
fd -t f "baseindex" --exec rg -l "version|Version|schema|Schema" {}
echo ""
echo "=== Examining blockfilter index implementation ==="
cat src/index/blockfilterindex.h
echo ""
echo "=== Checking BaseIndex class for versioning ==="
fd -t f "baseindex.h" --exec cat {}

Length of output: 4902


🏁 Script executed:

#!/bin/bash
# Let's search for BaseIndex class more specifically
echo "=== Searching for BaseIndex files ==="
fd -t f -e h -e cpp base | grep -v test
echo ""
echo "=== Looking at BaseIndex implementation ==="
fd -t f "base.h" src/index/ --exec cat {}
echo ""
echo "=== Looking at BaseIndex source ==="
fd -t f "base.cpp" src/index/ --exec head -50 {}

Length of output: 7644


🏁 Script executed:

#!/bin/bash
# Look for versioning or schema information in index databases
echo "=== Searching for database versioning patterns ==="
rg -n -C3 "DB_.*VERSION|VERSION.*DB|SCHEMA|schema|migrate|Migrate" src/index/ --type=cpp
echo ""
echo "=== Looking for any filter data format changes or compatibility ==="
rg -n -C3 "compatibility|compat|format|Format|change|Change" src/index/blockfilterindex.cpp
echo ""
echo "=== Examining blockfilter index database keys ==="
rg -n -C2 "DB_.*|'[A-Z]'" src/index/blockfilterindex.cpp
echo ""
echo "=== Check if there are any existing mechanisms for index invalidation ==="
rg -n -C3 "wipe|Wipe|clear|Clear|delete|Delete|remove|Remove" src/index/blockfilterindex.cpp src/index/base.cpp

Length of output: 10560


🏁 Script executed:

#!/bin/bash
# Let's look at the constructor and init logic more closely
echo "=== Looking at BlockFilterIndex constructor and f_wipe parameter ==="
rg -n -C10 "BlockFilterIndex.*f_wipe|f_wipe.*BlockFilterIndex" src/index/blockfilterindex.cpp
echo ""
echo "=== Check how indexes handle data changes in Bitcoin Core ==="
echo "Let's look at similar indexes to see if they have versioning:"
rg -n -C5 "version|Version" src/index/txindex.cpp src/index/coinstatsindex.cpp 2>/dev/null || echo "Files not found, checking what index files exist:"
fd -t f "index.cpp" src/index/
echo ""
echo "=== Check for any precedent of manual index deletion requirements ==="
rg -n -C3 "manual.*delet|delet.*manual|drop.*index|index.*drop" --type=cpp src/

Length of output: 1620


🏁 Script executed:

#!/bin/bash
# Check the actual index files that exist and look for versioning
echo "=== Checking txindex for versioning patterns ==="
rg -n -C3 "version|Version|DB_.*VERSION|VERSION.*DB" src/index/txindex.cpp
echo ""
echo "=== Checking coinstatsindex for versioning patterns ==="
rg -n -C3 "version|Version|DB_.*VERSION|VERSION.*DB" src/index/coinstatsindex.cpp
echo ""
echo "=== Look for how Bitcoin Core handles index compatibility changes ==="
rg -n -C5 "compatibility|backward.*compat|incompatible|breaking.*change" --type=cpp src/index/
echo ""
echo "=== Check if there are any release notes or documentation about index changes ==="
fd -t f -e md -e txt -e rst | head -20

Length of output: 1311


🏁 Script executed:

#!/bin/bash
# Let's check the Bitcoin Core approach to breaking changes in indexes
echo "=== Look at release notes for index breaking changes ==="
fd -name "RELEASE_NOTES*" -o -name "release-notes*" | head -5 | xargs cat 2>/dev/null || echo "No release notes found"
echo ""
echo "=== Check doc directory for any index documentation ==="
fd -t f . doc/ | grep -E "(index|blockfilter)" | head -10
echo ""
echo "=== Look at the blockfilter logic changes in this PR ==="
echo "Let's see what exactly changed in blockfilter.cpp:"
git show HEAD:src/blockfilter.cpp | tail -20 2>/dev/null || echo "Git not available, using cat:"
echo "Current version:"
tail -20 src/blockfilter.cpp
echo ""
echo "=== Look for any database migration patterns in the codebase ==="
rg -n -C2 "migrate|migration|upgrade" --type=cpp src/dbwrapper.cpp src/dbwrapper.h 2>/dev/null || echo "No dbwrapper files with migration patterns"

Length of output: 2180


🏁 Script executed:

#!/bin/bash
# Let's look at the specific changes in BasicFilterElements function
echo "=== Look at the BasicFilterElements function that was modified ==="
rg -n -A20 -B5 "BasicFilterElements" src/blockfilter.cpp
echo ""
echo "=== Check if there's any precedent for reindexing when filter logic changes ==="
rg -n -C3 "reindex" --type=cpp --type=md --type=txt . | head -20
echo ""
echo "=== Look for documentation about index compatibility ==="
fd -t f . | rg -i "release|change|upgrade|migration" | head -10 | xargs ls -la 2>/dev/null

Length of output: 4740


🏁 Script executed:

#!/bin/bash
# Let's examine the specific changes this PR makes to understand the impact
echo "=== Examining the key change - addition of ExtractSpecialTxFilterElements ==="
cat src/evo/specialtx_filter.h
echo ""
echo "=== Look at release notes to see how similar changes were handled ==="
echo "Checking existing release notes..."
cat doc/release-note-26194.md 2>/dev/null || echo "File not found"
echo ""
cat doc/release-notes-14707.md 2>/dev/null || echo "File not found"  
echo ""
cat doc/release-notes-22650.md 2>/dev/null || echo "File not found"
echo ""
echo "=== Check if there are any Bitcoin Core precedents for filter changes ==="
# Let's search in Bitcoin Core history or similar documentation
rg -n -i "blockfilter.*chang|chang.*blockfilter|filter.*incompatib|incompatib.*filter" --type=md --type=txt . 2>/dev/null || echo "No filter compatibility documentation found"

Length of output: 3232


@PastaPastaPasta Looking at this concern, yes, existing blockfilter indexes will need to be manually deleted after this PR is merged.

Analysis

This PR modifies the BasicFilterElements() function in src/blockfilter.cpp to include special transaction elements via the new ExtractSpecialTxFilterElements() function. This changes the content of compact block filters for any blocks containing Dash special transactions (ProRegTx, ProUpServTx, ProUpRegTx, ProUpRevTx, AssetLockTx).

The Problem

  1. Filter Content Changes: Existing cached blockfilter indexes contain filters generated without special transaction elements, but after this PR, the same blocks will generate filters with these elements.

  2. No Versioning Mechanism: The blockfilter index implementation has no versioning or compatibility detection system. I found no DB_VERSION keys or migration logic in the code.

  3. SPV Client Impact: SPV clients using old cached filters would miss special transactions, defeating the purpose of this feature.

Evidence from Code Analysis

  • The BlockFilterIndex class in src/index/blockfilterindex.h has no versioning mechanism
  • The base BaseIndex::DB class only stores a DB_BEST_BLOCK locator, not a schema version
  • The constructor has an f_wipe parameter for clearing data, but this is only used during initial setup

Recommendation

Release Notes should include:

### Compact Block Filters (BIP-157/158) - Breaking Change

- **Action Required**: Users running with `-blockfilterindex` enabled must manually delete their existing blockfilter index data before upgrading.
  
- **Steps**: 
  1. Stop the node
  2. Delete the `<datadir>/indexes/blockfilter/` directory
  3. Restart the node (the index will rebuild automatically)

- **Reason**: This release adds support for Dash special transactions to compact block filters. Existing cached filters are incomplete and would cause SPV clients to miss masternode and platform-related transactions.

- **Timeline**: The reindexing process will take time proportional to blockchain size.

This is similar to how Bitcoin Core handles breaking changes to index formats - they typically require manual deletion and rebuilding when the underlying data structure changes.


🧠 Learnings used
Learnt from: kwvg
PR: dashpay/dash#6543
File: src/wallet/receive.cpp:240-251
Timestamp: 2025-02-06T14:34:30.466Z
Learning: Pull request #6543 is focused on move-only changes and refactoring, specifically backporting from Bitcoin. Behavior changes should be proposed in separate PRs.

*/
// Helper function to add a script to the filter if it's not empty
static void AddScriptElement(const CScript& script,
const std::function<void(const std::vector<unsigned char>&)>& addElement)
Copy link
Collaborator


Is it correct that std::vector is used here because of Bitcoin's GCSFilter implementation? It seems like a Span could be used instead to avoid a memory allocation.
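
For context, the upstream element types look roughly like this (paraphrased, not verbatim), which is why a per-element std::vector is allocated either way; a Span-taking callback would only move the copy into the callback body:

#include <set>
#include <span>
#include <vector>

// Paraphrase of the element types in blockfilter.h: the element set owns its
// bytes, so every element is materialized as a std::vector on insertion.
using Element = std::vector<unsigned char>;
using ElementSet = std::set<Element>;

static void AddElement(ElementSet& elements, std::span<const unsigned char> data)
{
    if (data.empty()) return;
    elements.emplace(data.begin(), data.end()); // the allocation/copy happens here either way
}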

self.log.debug(f"Filter with multiple outputs: {multi_output_filter_size} hex chars")
self.log.debug(f"Control filter: {control_filter_size} hex chars")

assert multi_output_filter_size > control_filter_size, \
Copy link
Collaborator


nit: consider using the assert_greater_than helper here; it will also print the values on failure.
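
For example (assert_greater_than is the helper from test_framework.util referenced elsewhere in this review):

from test_framework.util import assert_greater_than

assert_greater_than(multi_output_filter_size, control_filter_size)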

Copy link

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (8)
src/index/blockfilterindex.h (1)

28-30: Versioning constant added — good, consider minor type/style consistency

Defining a private CURRENT_VERSION is the right approach. For consistency with other modules (e.g., src/instantsend/db.h) and to be explicit on width, consider switching to a fixed-width type and brace-init.

-    static constexpr int CURRENT_VERSION = 2;
+    static constexpr int32_t CURRENT_VERSION{2};
src/index/blockfilterindex.cpp (1)

132-148: Incompatible index detection covers both pre-versioned and older-version cases

The logic to block startup when DB_FILTER_POS exists and version < CURRENT_VERSION is correct. The tailored error messages for version == 0 vs. older-but-versioned DBs are helpful.

Consider adding a single-line hint that the index path is <datadir>/indexes/blockfilter/<filter_name>/ to help operators target the exact directory for manual deletion when multiple filter types are enabled.
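
Something along these lines, for illustration only (the actual message strings live in blockfilterindex.cpp and may differ):

return error("%s: %s block filter index is too old (found version %d, need %d). "
             "Please delete <datadir>/indexes/blockfilter/%s/ and restart to rebuild it.",
             __func__, GetName(), version, CURRENT_VERSION, GetName());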

doc/release-notes-6825.md (2)

31-36: Specify fenced code block language for the error example

Add a language to the fenced code block to satisfy markdownlint (MD040) and improve rendering.

-```
+```text
 Error: The basic filter index is incompatible with this version of Dash Core. 
 The index format has been updated to include special transaction data. 
 Please restart with -reindex to rebuild the blockfilter index, 
 or manually delete the indexes/blockfilter directory from your datadir.

6-11: Tone/grammar polish (optional)

Minor tightening can improve clarity; for example, “will fail to start with a clear error message” can be rephrased to avoid passive voice. Not a blocker.


If you want, I can push a wording pass across this file.

test/functional/p2p_blockfilters.py (4)

305-346: Create/sign AssetLockTx: assert signing success and simplify return path

Ensure we fail fast on incomplete signatures and avoid unnecessary decode/encode roundtrip.



```diff
         # Sign the transaction
-        tx_hex = lock_tx.serialize().hex()
-        signed_tx = node.signrawtransactionwithkey(tx_hex, self.privkeys)
-        return tx_from_hex(signed_tx["hex"]).serialize().hex()
+        tx_hex = lock_tx.serialize().hex()
+        signed_tx = node.signrawtransactionwithkey(tx_hex, self.privkeys)
+        assert signed_tx.get("complete"), f"Signing AssetLockTx failed: {signed_tx.get('errors')}"
+        return signed_tx["hex"]

341-342: Make OP_RETURN output zero-valued to avoid policy surprises

Non-zero OP_RETURN outputs are consensus-valid but non-standard under typical policy. To keep mempool/policy interactions predictable even if relay policies change, prefer a zero-valued OP_RETURN.

-        lock_tx.vout.append(CTxOut(int((amount_locked_1 + amount_locked_2) * COIN), CScript([OP_RETURN, b""])))
+        lock_tx.vout.append(CTxOut(0, CScript([OP_RETURN, b""])))

420-429: Filter size comparison can be brittle; keep logs as debug and prefer ≥ if flakes appear

Comparing serialized filter hex lengths is a reasonable proxy here, but can be noisy across environments. If flakes surface, consider using greater-or-equal or adding a small retry/mining of an extra empty block to reduce variance.

Happy to help adjust if CI shows intermittent failures.


382-393: Assert special tx type early (good); consider checking vout shape

Validating type == 8 is good. As a follow-up, you could assert there’s exactly one change output and one OP_RETURN in vout to catch accidental construction changes.

I can draft the assertions if desired.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between a1ecf5d and d3f1b9e.

📒 Files selected for processing (8)
  • doc/release-notes-6825.md (1 hunks)
  • src/blockfilter.cpp (2 hunks)
  • src/evo/specialtx_filter.cpp (1 hunks)
  • src/evo/specialtx_filter.h (1 hunks)
  • src/index/blockfilterindex.cpp (3 hunks)
  • src/index/blockfilterindex.h (1 hunks)
  • test/functional/feature_blockfilter_version.py (1 hunks)
  • test/functional/p2p_blockfilters.py (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • src/blockfilter.cpp
  • src/evo/specialtx_filter.h
  • src/evo/specialtx_filter.cpp
🧰 Additional context used
📓 Path-based instructions (3)
src/**/*.{cpp,h,cc,cxx,hpp}

📄 CodeRabbit Inference Engine (CLAUDE.md)

src/**/*.{cpp,h,cc,cxx,hpp}: Dash Core C++ codebase must be written in C++20 and require at least Clang 16 or GCC 11.1
Dash uses unordered_lru_cache for efficient caching with LRU eviction

Files:

  • src/index/blockfilterindex.h
  • src/index/blockfilterindex.cpp
test/functional/**/*.py

📄 CodeRabbit Inference Engine (CLAUDE.md)

Functional tests should be written in Python and placed in test/functional/

Files:

  • test/functional/feature_blockfilter_version.py
  • test/functional/p2p_blockfilters.py
doc/**

📄 CodeRabbit Inference Engine (CLAUDE.md)

Unless specifically prompted, avoid making changes to the doc directory (documentation)

Files:

  • doc/release-notes-6825.md
🧠 Learnings (1)
📓 Common learnings
Learnt from: kwvg
PR: dashpay/dash#6543
File: src/wallet/receive.cpp:240-251
Timestamp: 2025-02-06T14:34:30.466Z
Learning: Pull request #6543 is focused on move-only changes and refactoring, specifically backporting from Bitcoin. Behavior changes should be proposed in separate PRs.
Learnt from: CR
PR: dashpay/dash#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-20T18:42:49.794Z
Learning: Applies to src/evo/specialtx.h : Special transactions use payload extensions defined in src/evo/specialtx.h
🧬 Code Graph Analysis (4)
src/index/blockfilterindex.h (1)
src/instantsend/db.h (1)
  • CURRENT_VERSION (34-34)
test/functional/feature_blockfilter_version.py (2)
test/functional/test_framework/test_framework.py (1)
  • start_node (646-655)
test/functional/test_framework/util.py (2)
  • assert_equal (69-74)
  • get_datadir_path (420-421)
src/index/blockfilterindex.cpp (2)
src/index/blockfilterindex.h (1)
  • GetName (58-58)
src/instantsend/db.h (1)
  • CURRENT_VERSION (34-34)
test/functional/p2p_blockfilters.py (4)
test/functional/test_framework/messages.py (3)
  • CAssetLockTx (1197-1222)
  • ser_uint256 (132-133)
  • tx_from_hex (230-232)
test/functional/test_framework/script_util.py (1)
  • key_to_p2pkh_script (65-67)
test/functional/test_framework/util.py (1)
  • assert_greater_than (77-79)
test/functional/test_framework/test_node.py (1)
  • get_deterministic_priv_key (189-192)
🪛 Ruff (0.12.2)
test/functional/feature_blockfilter_version.py

8-8: shutil imported but unused

Remove unused import: shutil

(F401)


11-11: test_framework.util.assert_equal imported but unused

Remove unused import: test_framework.util.assert_equal

(F401)

🪛 Flake8 (7.2.0)
test/functional/feature_blockfilter_version.py

[error] 8-8: 'shutil' imported but unused

(F401)


[error] 11-11: 'test_framework.util.assert_equal' imported but unused

(F401)

🪛 LanguageTool
doc/release-notes-6825.md

[grammar] ~6-~6: There might be a mistake here.
Context: ...existing blockfilter indexes. ## Impact - Nodes with existing -blockfilterindex ...

(QB_NEW_EN)


[grammar] ~21-~21: There might be a mistake here.
Context: ...st. ### Option 2: Manual Index Deletion 1. Stop your node 2. Delete the blockfilter...

(QB_NEW_EN)


[grammar] ~29-~29: There might be a mistake here.
Context: ... rebuild automatically ## Error Message If you attempt to start a node with an o...

(QB_NEW_EN)


[grammar] ~38-~38: There might be a mistake here.
Context: ... your datadir. ``` ## Technical Details - The blockfilter index now includes field...

(QB_NEW_EN)


[grammar] ~39-~39: There might be a mistake here.
Context: ...s fields from Dash special transactions: - ProRegTx (masternode registration) - P...

(QB_NEW_EN)


[grammar] ~40-~40: There might be a mistake here.
Context: ...: - ProRegTx (masternode registration) - ProUpServTx (masternode service updates)...

(QB_NEW_EN)


[grammar] ~41-~41: There might be a mistake here.
Context: ...ProUpServTx (masternode service updates) - ProUpRegTx (masternode operator updates)...

(QB_NEW_EN)


[grammar] ~42-~42: There might be a mistake here.
Context: ...ProUpRegTx (masternode operator updates) - ProUpRevTx (masternode revocation) - A...

(QB_NEW_EN)


[grammar] ~43-~43: There might be a mistake here.
Context: ...) - ProUpRevTx (masternode revocation) - AssetLockTx (platform credit outputs) - ...

(QB_NEW_EN)


[grammar] ~44-~44: There might be a mistake here.
Context: ... - AssetLockTx (platform credit outputs) - A versioning system has been added to de...

(QB_NEW_EN)


[grammar] ~48-~48: There might be a mistake here.
Context: ... 2 (previously unversioned) ## Benefits - SPV clients can now detect and track Das...

(QB_NEW_EN)

🪛 markdownlint-cli2 (0.17.2)
doc/release-notes-6825.md

31-31: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
  • GitHub Check: linux64_tsan-build / Build source
  • GitHub Check: linux64_sqlite-build / Build source
  • GitHub Check: linux64_ubsan-build / Build source
  • GitHub Check: linux64-build / Build source
  • GitHub Check: linux64_fuzz-build / Build source
  • GitHub Check: arm-linux-build / Build source
  • GitHub Check: win64-build / Build source
  • GitHub Check: linux64_nowallet-build / Build source
  • GitHub Check: mac-build / Build source
🔇 Additional comments (4)
src/index/blockfilterindex.cpp (3)

33-33: DB key for versioning is well-scoped

Introducing DB_VERSION{'V'} alongside the other single-byte keys is clean and non-conflicting.


120-131: Forward/future-version guard is correct and defensive

The startup check rejects databases with a stored version higher than the code’s CURRENT_VERSION. This prevents running with unknown, newer formats.


169-173: First-commit write of DB_VERSION is atomic with other state

Writing DB_VERSION on the first commit via the same CDBBatch ensures atomicity with other index metadata. Good.
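
Roughly the pattern being described, as a sketch (member names such as m_next_filter_pos are assumptions, not the verbatim tree code):

// Add the version to the same batch as the rest of the index metadata, so
// either all of it lands on disk or none of it does.
CDBBatch batch(*m_db);
if (!m_db->Exists(DB_VERSION)) {
    batch.Write(DB_VERSION, CURRENT_VERSION);  // only written on the first commit
}
batch.Write(DB_FILTER_POS, m_next_filter_pos); // existing metadata write
if (!m_db->WriteBatch(batch)) return false;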

test/functional/p2p_blockfilters.py (1)

448-459: Multiple outputs test is clear and direct

Good coverage that all credit outputs are recognized and exposed in verbose block JSON.

Comment on lines 8 to 11
import shutil

from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import assert_equal, get_datadir_path
Copy link


⚠️ Potential issue

Fix lint errors: remove unused imports

shutil and assert_equal are unused, causing Ruff/Flake8 failures (F401). Remove them or use them.

-import os
-import shutil
+import os
@@
-from test_framework.test_framework import BitcoinTestFramework
-from test_framework.util import assert_equal, get_datadir_path
+from test_framework.test_framework import BitcoinTestFramework
+from test_framework.util import get_datadir_path

Committable suggestion skipped: line range outside the PR's diff.

🧰 Tools
🪛 Ruff (0.12.2)

8-8: shutil imported but unused

Remove unused import: shutil

(F401)


11-11: test_framework.util.assert_equal imported but unused

Remove unused import: test_framework.util.assert_equal

(F401)

🪛 Flake8 (7.2.0)

[error] 8-8: 'shutil' imported but unused

(F401)


[error] 11-11: 'test_framework.util.assert_equal' imported but unused

(F401)

🤖 Prompt for AI Agents
In test/functional/feature_blockfilter_version.py around lines 8 to 11, the
imports include unused names (shutil and assert_equal) causing lint (F401)
failures; remove these unused imports from the file (delete shutil and
assert_equal from the import list and/or the corresponding import statements) so
only required modules are imported, or alternatively use them if they were
intended — most straightforward fix: remove shutil and assert_equal from the
imports.

Comment on lines 38 to 59
# Simulate an old index by removing the version marker
# We'll manipulate the DB to remove the version key
self.log.info("Simulating old index format by removing version marker...")

# Note: In practice, we'd need to use a DB tool to remove the version key
# For now, we'll just verify that a node with an index created before
# the version was added will fail to start

# For this test, we'll create a situation where the index exists
# but without version info, which simulates an old index
blockfilter_path = os.path.join(datadir, "regtest", "indexes", "blockfilter")

# Check that the index directory exists
assert os.path.exists(blockfilter_path), "Blockfilter index directory should exist"

# Now restart and it should work (since we just created it with the new version)
self.log.info("Restarting node with existing compatible index...")
self.start_node(0)

# Verify it still works
filter_result = node.getblockfilter(blockhash, 'basic')
assert 'filter' in filter_result
Copy link

@coderabbitai coderabbitai bot Aug 20, 2025


🛠️ Refactor suggestion

Test doesn’t actually simulate an “old” index; make it validate rebuild path instead

As written, the test does not remove the DB version key or otherwise create an incompatible index; it simply restarts the node, so it doesn’t exercise the new version checks. If manipulating LevelDB keys isn’t feasible in functional tests, convert this to validate the documented recovery path: delete the blockfilter index directory and verify the index rebuilds and remains usable.

-        # Simulate an old index by removing the version marker
-        # We'll manipulate the DB to remove the version key
-        self.log.info("Simulating old index format by removing version marker...")
-        
-        # Note: In practice, we'd need to use a DB tool to remove the version key
-        # For now, we'll just verify that a node with an index created before
-        # the version was added will fail to start
-        
-        # For this test, we'll create a situation where the index exists
-        # but without version info, which simulates an old index
-        blockfilter_path = os.path.join(datadir, "regtest", "indexes", "blockfilter")
-        
-        # Check that the index directory exists
-        assert os.path.exists(blockfilter_path), "Blockfilter index directory should exist"
-        
-        # Now restart and it should work (since we just created it with the new version)
-        self.log.info("Restarting node with existing compatible index...")
-        self.start_node(0)
-        
-        # Verify it still works
-        filter_result = node.getblockfilter(blockhash, 'basic')
-        assert 'filter' in filter_result
+        # Simulate operator action from release notes: delete the blockfilter index
+        blockfilter_path = os.path.join(datadir, "regtest", "indexes", "blockfilter")
+        assert os.path.exists(blockfilter_path), "Blockfilter index directory should exist"
+
+        self.log.info("Deleting blockfilter index directory to simulate manual recovery...")
+        import shutil
+        shutil.rmtree(blockfilter_path)
+
+        self.log.info("Restarting node to rebuild the blockfilter index...")
+        self.start_node(0, extra_args=["-blockfilterindex"])
+
+        # Wait for the index to be rebuilt enough to serve the target filter
+        def filter_available():
+            try:
+                res = self.nodes[0].getblockfilter(blockhash, 'basic')
+                return isinstance(res, dict) and 'filter' in res
+            except Exception:
+                return False
+        self.wait_until(filter_available, timeout=60)

If you want a true “version-too-old” failure path covered, I can draft a small C++ unit test that writes a DB without the version key (or with an older version) and asserts Init() fails with the expected message.

🤖 Prompt for AI Agents
In test/functional/feature_blockfilter_version.py around lines 38-59, the test
currently does not simulate an “old” index (it only restarts the node) so it
never exercises version/rebuild behavior; replace that section to validate the
documented recovery path by stopping the node, removing the entire blockfilter
index directory (datadir/regtest/indexes/blockfilter), then restart the node and
assert the index is rebuilt and usable (wait for the node to come back, call
getblockfilter for the known blockhash and assert it returns a filter, and
assert the index directory (and version marker) has been recreated). If removal
of LevelDB keys is feasible here instead, alternatively stop the node, remove
the version key from the index DB, restart and assert Init() fails or triggers
rebuild per the desired test variant.

Copy link


yes, this whole test doesn't actually test anything, should be fixed or removed

Copy link


@UdjinM6 You're absolutely right - the current test is essentially a no-op that doesn't validate any of the new versioning behavior. Let me provide a proper implementation that actually tests the documented recovery path:

#!/usr/bin/env python3
# Copyright (c) 2025 The Dash Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
"""Test blockfilter index version checking and recovery."""

import os
import shutil

from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import assert_equal, get_datadir_path


class BlockFilterVersionTest(BitcoinTestFramework):
    def set_test_params(self):
        self.setup_clean_chain = True
        self.num_nodes = 1
        self.extra_args = [["-blockfilterindex"]]

    def run_test(self):
        self.log.info("Testing blockfilter index recovery procedure...")
        
        node = self.nodes[0]
        datadir = get_datadir_path(self.options.tmpdir, 0)
        
        # Mine some blocks to create an index
        self.log.info("Mining blocks to create initial index...")
        self.generate(node, 10)
        
        # Verify filter index is working and get a reference filter
        blockhash = node.getblockhash(5)
        original_filter = node.getblockfilter(blockhash, 'basic')
        assert 'filter' in original_filter
        
        self.log.info("Stopping node...")
        self.stop_node(0)
        
        # Test the documented recovery procedure: delete the blockfilter index directory
        blockfilter_path = os.path.join(datadir, "regtest", "indexes", "blockfilter")
        assert os.path.exists(blockfilter_path), "Blockfilter index directory should exist"
        
        self.log.info("Deleting blockfilter index directory to simulate manual recovery...")
        shutil.rmtree(blockfilter_path)
        assert not os.path.exists(blockfilter_path), "Blockfilter index directory should be deleted"
        
        # Restart node - it should rebuild the index automatically
        self.log.info("Restarting node to trigger index rebuild...")
        self.start_node(0)
        
        # Wait for the index to rebuild and become available
        self.log.info("Waiting for index rebuild to complete...")
        def filter_available():
            try:
                result = node.getblockfilter(blockhash, 'basic')
                return isinstance(result, dict) and 'filter' in result
            except Exception:
                return False
        
        self.wait_until(filter_available, timeout=60)
        
        # Verify the rebuilt index works and produces the same filter
        rebuilt_filter = node.getblockfilter(blockhash, 'basic')
        assert_equal(rebuilt_filter['filter'], original_filter['filter'], 
                    "Rebuilt filter should match original")
        
        # Verify the index directory was recreated
        assert os.path.exists(blockfilter_path), "Blockfilter index directory should be recreated"
        
        self.log.info("✅ Blockfilter index recovery procedure validated successfully")


if __name__ == '__main__':
    BlockFilterVersionTest().main()

This version actually tests the documented recovery procedure from the release notes:

  1. Creates a working blockfilter index
  2. Deletes the index directory (simulating the manual recovery step)
  3. Restarts the node and verifies it rebuilds the index automatically
  4. Confirms the rebuilt index works correctly and produces identical filters

This provides genuine value by validating the user experience described in the release notes.


🧠 Learnings used
Learnt from: kwvg
PR: dashpay/dash#6543
File: src/wallet/receive.cpp:240-251
Timestamp: 2025-02-06T14:34:30.466Z
Learning: Pull request #6543 is focused on move-only changes and refactoring, specifically backporting from Bitcoin. Behavior changes should be proposed in separate PRs.

@UdjinM6
Copy link

UdjinM6 commented Aug 20, 2025

Reindexing to fix the blockfilter index is huge overkill imo, pls consider f28bf6d

@PastaPastaPasta
Copy link
Member Author

@UdjinM6 I'll be away from the computer for a little bit. Feel free to push your commit, along with a commit that fixes the version test.

Copy link

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
doc/release-notes-6825.md (1)

20-20: Remove stray “20” at EOF.

Looks like an artifact from annotation; keep the file ending clean.

-20
+
src/evo/assetlocktx.cpp (1)

220-221: Fix ToString formatting: wrong specifiers, missing closing parenthesis

The format string uses %d for a hex string and misses the closing “)”. Also prefer %u for unsigned fields and pass strings directly.

Apply:

-    return strprintf("CAssetUnlockPayload(nVersion=%d,index=%d,fee=%d.%08d,requestedHeight=%d,quorumHash=%d,quorumSig=%s",
-            nVersion, index, fee / COIN, fee % COIN, requestedHeight, quorumHash.GetHex(), quorumSig.ToString().c_str());
+    return strprintf("CAssetUnlockPayload(nVersion=%d,index=%u,fee=%d.%08d,requestedHeight=%u,quorumHash=%s,quorumSig=%s)",
+            nVersion, index, fee / COIN, fee % COIN, requestedHeight, quorumHash.ToString(), quorumSig.ToString());
🧹 Nitpick comments (7)
doc/release-notes-6825.md (3)

4-4: Fix terminology and clarity in Summary.

Use consistent “block filter” terminology (two words) and tighten wording.

-The compact block filter index format has been updated to include Dash special transaction data, providing feature parity with bloom filters for SPV client support. This change is **incompatible** with existing blockfilter indexes.
+The compact block filter index format has been updated to include Dash special-transaction data, providing feature parity with bloom filters for SPV client support. This change is **incompatible** with existing block filter indexes.

7-7: Consistent naming: “block filter index” vs “blockfilter index.”

Align with the heading and the rest of the document.

-- The blockfilter index now includes fields from Dash special transactions:
+- The block filter index now includes fields from Dash special transactions:

16-19: Optional: crisp phrasing of benefits.

Tiny wording polish; no behavior change.

- - SPV clients can now detect and track Dash-specific transactions
- - Feature parity between bloom filters and compact block filters
- - Protection against serving incorrect filter data to light clients
+ - SPV clients can detect and track Dash-specific transactions.
+ - Feature parity between bloom filters and compact block filters.
+ - Reduced risk of serving incorrect/partial filter data to light clients.
src/evo/assetlocktx.cpp (4)

100-109: Make CAssetLockPayload::ToString robust for empty creditOutputs

As written, when creditOutputs is empty, outputs becomes “]” instead of “[]”. While CheckAssetLockTx forbids empty creditOutputs, ToString should be robust.

Apply:

 std::string CAssetLockPayload::ToString() const
 {
-    std::string outputs{"["};
+    std::string outputs{"["};
+    if (creditOutputs.empty()) {
+        outputs.push_back(']');
+        return strprintf("CAssetLockPayload(nVersion=%d,creditOutputs=%s)", nVersion, outputs.c_str());
+    }
     for (const CTxOut& tx: creditOutputs) {
         outputs.append(tx.ToString());
         outputs.append(",");
     }
     outputs.back() = ']';
     return strprintf("CAssetLockPayload(nVersion=%d,creditOutputs=%s)", nVersion, outputs.c_str());
 }

138-141: Use unsigned format specifiers for heights/indices in LogPrint

index and requestedHeight are non-negative domain values; use %u for clarity and to avoid sign-mismatch warnings.

-        LogPrint(BCLog::CREDITPOOL, "Asset unlock tx %d with requested height %d could not be accepted on height: %d\n",
-                index, requestedHeight, pindexTip->nHeight);
+        LogPrint(BCLog::CREDITPOOL, "Asset unlock tx %u with requested height %u could not be accepted at height: %d\n",
+                index, requestedHeight, pindexTip->nHeight);

115-115: Prefer constexpr for request-id prefix and avoid heap/static-init of std::string

A constexpr buffer avoids dynamic initialization at startup and makes intent clear. If you adopt this, ensure the call site constructs a std::string for serialization.

-const std::string ASSETUNLOCK_REQUESTID_PREFIX = "plwdtx";
+constexpr char ASSETUNLOCK_REQUESTID_PREFIX[] = "plwdtx";

And adjust usage:

-    const uint256 requestId = ::SerializeHash(std::make_pair(ASSETUNLOCK_REQUESTID_PREFIX, index));
+    const uint256 requestId = ::SerializeHash(std::make_pair(std::string(ASSETUNLOCK_REQUESTID_PREFIX), index));

Also applies to: 150-151


8-10: Avoid fragile forward declaration of BuildSignHash; include the canonical header

The standalone forward declaration in src/evo/assetlocktx.cpp is brittle: if the signature of llmq::BuildSignHash ever changes, this file will continue to compile but invoke the wrong overload (ODR/signature drift risk). Instead of maintaining a separate forward declaration, include the existing declaration from llmq/signing.h, which is the authoritative source.

Places to update:

  • src/evo/assetlocktx.cpp, lines 8–10: add the header
  • src/evo/assetlocktx.cpp, lines 24–27: remove the forward declaration

Proposed diff:

--- a/src/evo/assetlocktx.cpp
+++ b/src/evo/assetlocktx.cpp
@@ -8,6 +8,7 @@
 #include <llmq/commitment.h>
 #include <llmq/quorums.h>
+#include <llmq/signing.h>      // for llmq::BuildSignHash

 // other includes…

 namespace llmq {
@@ -24,4 +25,3 @@
     const uint256 requestId = ::SerializeHash(std::make_pair(ASSETUNLOCK_REQUESTID_PREFIX, index));
 
-// forward declaration to avoid circular dependency
-uint256 BuildSignHash(Consensus::LLMQType llmqType, const uint256& quorumHash, const uint256& id, const uint256& msgHash);
 } // namespace llmq

This change ensures you’re always using the up-to-date signature without introducing new include cycles.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between d3f1b9e and b2522a2.

📒 Files selected for processing (5)
  • doc/release-notes-6825.md (1 hunks)
  • src/evo/assetlocktx.cpp (1 hunks)
  • src/evo/specialtx_filter.cpp (1 hunks)
  • src/evo/specialtx_filter.h (1 hunks)
  • src/index/blockfilterindex.cpp (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • src/evo/specialtx_filter.cpp
  • src/evo/specialtx_filter.h
  • src/index/blockfilterindex.cpp
🧰 Additional context used
📓 Path-based instructions (3)
doc/**

📄 CodeRabbit Inference Engine (CLAUDE.md)

Unless specifically prompted, avoid making changes to the doc directory (documentation)

Files:

  • doc/release-notes-6825.md
src/**/*.{cpp,h,cc,cxx,hpp}

📄 CodeRabbit Inference Engine (CLAUDE.md)

src/**/*.{cpp,h,cc,cxx,hpp}: Dash Core C++ codebase must be written in C++20 and require at least Clang 16 or GCC 11.1
Dash uses unordered_lru_cache for efficient caching with LRU eviction

Files:

  • src/evo/assetlocktx.cpp
src/{masternode,evo}/**/*.{cpp,h,cc,cxx,hpp}

📄 CodeRabbit Inference Engine (CLAUDE.md)

Masternode lists must use immutable data structures (Immer library) for thread safety

Files:

  • src/evo/assetlocktx.cpp
🧠 Learnings (2)
📓 Common learnings
Learnt from: kwvg
PR: dashpay/dash#6543
File: src/wallet/receive.cpp:240-251
Timestamp: 2025-02-06T14:34:30.466Z
Learning: Pull request #6543 is focused on move-only changes and refactoring, specifically backporting from Bitcoin. Behavior changes should be proposed in separate PRs.
📚 Learning: 2025-07-20T18:42:49.794Z
Learnt from: CR
PR: dashpay/dash#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-20T18:42:49.794Z
Learning: Applies to src/evo/specialtx.h : Special transactions use payload extensions defined in src/evo/specialtx.h

Applied to files:

  • src/evo/assetlocktx.cpp
🧬 Code Graph Analysis (1)
src/evo/assetlocktx.cpp (2)
src/evo/mnhftx.h (1)
  • quorumHash (35-35)
src/evo/assetlocktx.h (1)
  • quorumHash (81-81)
🪛 LanguageTool
doc/release-notes-6825.md

[grammar] ~6-~6: There might be a mistake here.
Context: ...ockfilter indexes. ## Technical Details - The blockfilter index now includes field...

(QB_NEW_EN)


[grammar] ~7-~7: There might be a mistake here.
Context: ...s fields from Dash special transactions: - ProRegTx (masternode registration) - P...

(QB_NEW_EN)


[grammar] ~8-~8: There might be a mistake here.
Context: ...: - ProRegTx (masternode registration) - ProUpServTx (masternode service updates)...

(QB_NEW_EN)


[grammar] ~9-~9: There might be a mistake here.
Context: ...ProUpServTx (masternode service updates) - ProUpRegTx (masternode operator updates)...

(QB_NEW_EN)


[grammar] ~10-~10: There might be a mistake here.
Context: ...ProUpRegTx (masternode operator updates) - ProUpRevTx (masternode revocation) - A...

(QB_NEW_EN)


[grammar] ~11-~11: There might be a mistake here.
Context: ...) - ProUpRevTx (masternode revocation) - AssetLockTx (platform credit outputs) - ...

(QB_NEW_EN)


[grammar] ~12-~12: There might be a mistake here.
Context: ... - AssetLockTx (platform credit outputs) - A versioning system has been added to de...

(QB_NEW_EN)


[grammar] ~16-~16: There might be a mistake here.
Context: ... 2 (previously unversioned) ## Benefits - SPV clients can now detect and track Das...

(QB_NEW_EN)


[grammar] ~18-~18: There might be a mistake here.
Context: ... bloom filters and compact block filters - Protection against serving incorrect fil...

(QB_NEW_EN)


[grammar] ~19-~19: There might be a mistake here.
Context: ...g incorrect filter data to light clients

(QB_NEW_EN)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
  • GitHub Check: linux64_ubsan-build / Build source
  • GitHub Check: linux64_fuzz-build / Build source
  • GitHub Check: linux64_nowallet-build / Build source
  • GitHub Check: x86_64-w64-mingw32 / Build depends
  • GitHub Check: x86_64-apple-darwin / Build depends
  • GitHub Check: Lint / Run linters
  • GitHub Check: x86_64-pc-linux-gnu_multiprocess / Build depends
  • GitHub Check: arm-linux-gnueabihf / Build depends
🔇 Additional comments (1)
doc/release-notes-6825.md (1)

15-16: I’ve queued a script to inspect the BaseIndex::DB declaration in src/index/base.h. This will confirm exactly what the “wipe” behavior is and whether manual directory deletion is required.

Comment on lines +7 to +14
- The blockfilter index now includes fields from Dash special transactions:
- ProRegTx (masternode registration)
- ProUpServTx (masternode service updates)
- ProUpRegTx (masternode operator updates)
- ProUpRevTx (masternode revocation)
- AssetLockTx (platform credit outputs)
- A versioning system has been added to detect incompatible indexes on startup
- The index version is now 2 (previously unversioned)
Copy link


💡 Verification agent

🧩 Analysis chain

Document exactly which elements are added (and OP_RETURN exclusion).

Release notes should state which special-transaction fields are fed into filters, matching bloom-filter parity. Add a concise bullet block.

Please verify the exact field list matches the implementation in evo/specialtx_filter.{h,cpp}.

   - AssetLockTx (platform credit outputs)
 - A versioning system has been added to detect incompatible indexes on startup
 - The index version is now 2 (previously unversioned)
+
+  Extracted elements (parity with bloom-filter handling):
+  - ProRegTx: collateral outpoint, owner key, voting key, payout script
+  - ProUpServTx: ProTx hash, operator payout script
+  - ProUpRegTx: ProTx hash, voting key, payout script
+  - ProUpRevTx: ProTx hash
+  - AssetLockTx: credit output scripts excluding OP_RETURN

🏁 Script executed:

#!/bin/bash
# Locate the specialtx_filter header and implementation files
rg -l "specialtx_filter" -g "*.h" -g "*.cpp"

# Verify ProRegTx extraction details
rg -n "ProRegTx" -g "*.h" -g "*.cpp"

# Verify ProUpServTx extraction details
rg -n "ProUpServTx" -g "*.h" -g "*.cpp"

# Verify ProUpRegTx extraction details
rg -n "ProUpRegTx" -g "*.h" -g "*.cpp"

# Verify ProUpRevTx extraction details
rg -n "ProUpRevTx" -g "*.h" -g "*.cpp"

# Verify AssetLockTx extraction details and any OP_RETURN exclusions
rg -n "AssetLockTx" -g "*.h" -g "*.cpp"

Length of output: 37859


🏁 Script executed:

#!/bin/bash
sed -n '1,200p' src/evo/specialtx_filter.cpp

Length of output: 5243


Add detailed extraction list to release notes

To ensure the release notes accurately reflect which fields are indexed (and the OP_RETURN exclusion), please update doc/release-notes-6825.md (around lines 7–14) by inserting the following block immediately after the list of transaction types:

- AssetLockTx (platform credit outputs)
- A versioning system has been added to detect incompatible indexes on startup
- The index version is now 2 (previously unversioned)
+
+Extracted elements (parity with bloom-filter handling):
+ - ProRegTx: collateral outpoint, owner key ID, voting key ID, payout script
+ - ProUpServTx: ProTx hash, operator payout script
+ - ProUpRegTx: ProTx hash, voting key ID, payout script
+ - ProUpRevTx: ProTx hash
+ - AssetLockTx: credit output scripts (excluding OP_RETURN)

This exactly matches the implementation in src/evo/specialtx_filter.cpp.

Committable suggestion skipped: line range outside the PR's diff.

🧰 Tools
🪛 LanguageTool

[grammar] ~7-~7: There might be a mistake here.
Context: ...s fields from Dash special transactions: - ProRegTx (masternode registration) - P...

(QB_NEW_EN)


[grammar] ~8-~8: There might be a mistake here.
Context: ...: - ProRegTx (masternode registration) - ProUpServTx (masternode service updates)...

(QB_NEW_EN)


[grammar] ~9-~9: There might be a mistake here.
Context: ...ProUpServTx (masternode service updates) - ProUpRegTx (masternode operator updates)...

(QB_NEW_EN)


[grammar] ~10-~10: There might be a mistake here.
Context: ...ProUpRegTx (masternode operator updates) - ProUpRevTx (masternode revocation) - A...

(QB_NEW_EN)


[grammar] ~11-~11: There might be a mistake here.
Context: ...) - ProUpRevTx (masternode revocation) - AssetLockTx (platform credit outputs) - ...

(QB_NEW_EN)


[grammar] ~12-~12: There might be a mistake here.
Context: ... - AssetLockTx (platform credit outputs) - A versioning system has been added to de...

(QB_NEW_EN)

🤖 Prompt for AI Agents
In doc/release-notes-6825.md around lines 7 to 14, the release notes list the
Dash special transaction types but are missing the detailed field extraction
list (and note about excluding OP_RETURN) that mirrors the implementation in
src/evo/specialtx_filter.cpp; update the file by inserting the exact extraction
list block immediately after the transaction-type list so the notes enumerate
which fields are indexed for each special tx and explicitly state that OP_RETURN
outputs are not indexed, matching the code behavior.

UdjinM6
UdjinM6 previously approved these changes Aug 20, 2025
Copy link

@UdjinM6 UdjinM6 left a comment


self-ACK b2522a2

@PastaPastaPasta
Copy link
Member Author

LGTM

if (!m_db->Read(DB_VERSION, version)) {
return error("%s: Failed to read %s index version from database", __func__, GetName());
}
if (version > CURRENT_VERSION) {
Copy link
Collaborator


why exactly do we need to check DB_VERSION twice - once in the BlockFilterIndex constructor and a second time here in BlockFilterIndex::Init?

    for (const auto& filter_type : g_enabled_filter_types) {                                                                              
        InitBlockFilterIndex(filter_type, filter_index_cache, false, fReindex);                                                           
        if (!GetBlockFilterIndex(filter_type)->Start(chainman.ActiveChainstate())) {                                                      
            return false;                                                                                                                 
        }                                                                                                                                 
    }

Init is called externally when the object already exists and the check & reset of the DB has already been done, isn't it?

Copy link
Member Author


I see nothing in BlockFilterIndex::BlockFilterIndex that checks version

Copy link
Collaborator


    // Check version
    int version = 0;
    if (!m_db->Read(DB_VERSION, version) || version < CURRENT_VERSION) { <----
        // No version or too old version means we need to start from scratch
        LogPrintf("%s: Outdated or no version blockfilter, starting from scratch\n", __func__);
        m_db.reset();
        m_db = std::make_unique<BaseIndex::DB>(path / "db", n_cache_size, f_memory, /*f_wipe=*/true); <-----
        m_db->Write(DB_VERSION, CURRENT_VERSION);
    }

Copy link


In ctor we check for an outdated version. Here we check if a version we read is from the future.

Copy link


could probably drop if (m_db->Exists(DB_VERSION)) { because it must exist at this point
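
i.e., something like this (a sketch assembled from the snippets quoted in this thread, not the verbatim tree code):

int version{0};
if (!m_db->Read(DB_VERSION, version)) {
    return error("%s: Failed to read %s index version from database", __func__, GetName());
}
if (version > CURRENT_VERSION) {
    return error("%s: %s index version %d is newer than this software supports (%d)",
                 __func__, GetName(), version, CURRENT_VERSION);
}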

Copy link
Collaborator


-    if (!m_db->Read(DB_VERSION, version) || version < CURRENT_VERSION) {
+    if (!m_db->Read(DB_VERSION, version) || version != CURRENT_VERSION) {

could it be like that in ctor and that's all? @UdjinM6

Copy link

@UdjinM6 UdjinM6 Aug 21, 2025


It depends. Do we want to recreate it if the user temporarily started an older version (they will have to recreate it once again when they are back on the newer one), or do we want to just quit (they can disable the filter for some time to be able to run an older node)?

Copy link
Collaborator

OK, it seems the current solution is fine then.
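
To summarize the scheme settled on above, a minimal standalone sketch (the KvStore type is a hypothetical stand-in for BaseIndex::DB; the real checks live in the BlockFilterIndex constructor and BlockFilterIndex::Init):

    #include <cstdio>
    #include <map>
    #include <string>

    static constexpr int CURRENT_VERSION = 2;

    // Hypothetical in-memory stand-in for the on-disk index database.
    struct KvStore {
        std::map<std::string, int> data;
        bool Read(const std::string& key, int& out) const {
            auto it = data.find(key);
            if (it == data.end()) return false;
            out = it->second;
            return true;
        }
        void Write(const std::string& key, int value) { data[key] = value; }
        void Wipe() { data.clear(); }  // models re-creating the DB with f_wipe=true
    };

    // Constructor-time check: missing or outdated version -> wipe and start from scratch.
    void ConstructIndex(KvStore& db) {
        int version = 0;
        if (!db.Read("version", version) || version < CURRENT_VERSION) {
            std::printf("Outdated or no version, starting from scratch\n");
            db.Wipe();
            db.Write("version", CURRENT_VERSION);
        }
    }

    // Init-time check: a version from the future was written by a newer release,
    // so refuse to run instead of serving filters we cannot interpret.
    bool InitIndex(const KvStore& db) {
        int version = 0;
        if (!db.Read("version", version)) {
            std::printf("Failed to read index version\n");
            return false;
        }
        if (version > CURRENT_VERSION) {
            std::printf("Index version %d is newer than supported %d, refusing to start\n", version, CURRENT_VERSION);
            return false;
        }
        return true;
    }

    int main() {
        KvStore db;
        db.Write("version", 3);  // simulate data left behind by a newer release
        ConstructIndex(db);      // not outdated, so the ctor leaves it alone
        std::printf("init ok: %s\n", InitIndex(db) ? "yes" : "no");  // Init rejects the future version
    }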

# Breaking Change: Block Filter Index Format Update

## Summary
The compact block filter index format has been updated to include Dash special transaction data, providing feature parity with bloom filters for SPV client support. This change is **incompatible** with existing blockfilter indexes.
Copy link
Collaborator

It should be mentioned that the index will be automatically recreated, shouldn't it?

Copy link
Collaborator

@knst knst left a comment

utACK 4b90441

Copy link

@UdjinM6 UdjinM6 left a comment

utACK 4b90441

Copy link

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
doc/release-notes-6825.md (1)

7-12: Add the explicit list of extracted fields (and the OP_RETURN exclusion) to match the implementation.

Release notes should enumerate exactly which elements are added per special tx type, including the OP_RETURN exclusion for AssetLockTx, to give SPV implementers precise expectations and to mirror evo/specialtx_filter.{h,cpp} and bloom-filter parity. Insert immediately after the transaction-type bullets.

   - ProUpRevTx (masternode revocation)
   - AssetLockTx (platform credit outputs)
 
+Extracted elements (parity with bloom-filter handling):
+ - ProRegTx: collateral outpoint, owner key ID, voting key ID, payout script
+ - ProUpServTx: ProTx hash, operator payout script
+ - ProUpRegTx: ProTx hash, voting key ID, payout script
+ - ProUpRevTx: ProTx hash
+ - AssetLockTx: credit output scripts (excluding OP_RETURN)
+
 - A versioning system has been added to detect incompatible indexes on startup
🧹 Nitpick comments (5)
doc/release-notes-6825.md (5)

13-14: Be explicit about the version number and detection behavior.

Tighten this into a single, precise sentence that states CURRENT_VERSION and the startup action on mismatch.

-- A versioning system has been added to detect incompatible indexes on startup
-- The index version is now 2 (previously unversioned)
+- The block filter index now uses versioning (CURRENT_VERSION = 2). On startup, a missing or incompatible version triggers an automatic wipe-and-rebuild of the index.

1-1: Consistency: use “block filter” (two words) throughout.

Elsewhere in the codebase and upstream docs this is styled as “block filter index.” Align the heading for consistency.

-# Breaking Change: Block Filter Index Format Update
+# Breaking Change: Block filter index format update

7-12: Minor grammar/punctuation fixes in the bullets.

Add periods or drop trailing punctuation consistently; also make the parenthetical phrases singular.

-  - ProRegTx (masternode registration)
-  - ProUpServTx (masternode service updates)
-  - ProUpRegTx (masternode operator updates)
-  - ProUpRevTx (masternode revocation)
-  - AssetLockTx (platform credit outputs)
+  - ProRegTx (masternode registration)
+  - ProUpServTx (masternode service update)
+  - ProUpRegTx (masternode operator update)
+  - ProUpRevTx (masternode revocation)
+  - AssetLockTx (platform credit outputs)

16-19: Add an “Upgrade notes” subsection for operators.

A short operational blurb reduces support load and mirrors common release-note patterns.

 ## Benefits
 - SPV clients can now detect and track Dash-specific transactions
 - Feature parity between bloom filters and compact block filters
 - Protection against serving incorrect filter data to light clients
+
+## Upgrade notes
+- If you run with a block filter index enabled, the node will rebuild the index to version 2 on first startup after upgrade.
+- During the rebuild, compact filters will not be served to peers; expect elevated disk I/O and CPU usage.
+- No manual deletion of <datadir>/indexes/blockfilter is required.

4-4: Document auto-recreation behavior and its operational impact

– File: doc/release-notes-6825.md (around line 4)
– Confirmed via BlockFilterIndex constructor (lines 110–160) that on startup if DB_VERSION is missing or older than CURRENT_VERSION (2), the index is wiped (f_wipe=true) and rebuilt from scratch.

Suggested release-notes update:

-The compact block filter index format has been updated to include Dash special transaction data, providing feature parity with bloom filters for SPV client support. This change is **incompatible** with existing blockfilter indexes. Existing blockfilter indexes will automatically be re-created with the new version.
+The compact block filter index format has been updated to include Dash special transaction data, providing feature parity with bloom filters for SPV client support. This change is **incompatible** with existing block filter indexes. If an incompatible index is detected on startup, it will be automatically wiped and rebuilt at version 2.
+Note: Rebuilding scans the entire blockchain and may take several hours on a full node. During the rebuild, block filters will be temporarily unavailable to peers.
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between b2522a2 and 4b90441.

📒 Files selected for processing (1)
  • doc/release-notes-6825.md (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
doc/**

📄 CodeRabbit inference engine (CLAUDE.md)

Unless specifically prompted, avoid making changes to the doc directory (documentation)

Files:

  • doc/release-notes-6825.md
🧠 Learnings (1)
📓 Common learnings
Learnt from: kwvg
PR: dashpay/dash#6543
File: src/wallet/receive.cpp:240-251
Timestamp: 2025-02-06T14:34:30.466Z
Learning: Pull request #6543 is focused on move-only changes and refactoring, specifically backporting from Bitcoin. Behavior changes should be proposed in separate PRs.
🪛 LanguageTool
doc/release-notes-6825.md

[grammar] ~6-~6: There might be a mistake here.
Context: ...h the new version. ## Technical Details - The blockfilter index now includes field...

(QB_NEW_EN)


[grammar] ~7-~7: There might be a mistake here.
Context: ...s fields from Dash special transactions: - ProRegTx (masternode registration) - P...

(QB_NEW_EN)


[grammar] ~8-~8: There might be a mistake here.
Context: ...: - ProRegTx (masternode registration) - ProUpServTx (masternode service updates)...

(QB_NEW_EN)


[grammar] ~9-~9: There might be a mistake here.
Context: ...ProUpServTx (masternode service updates) - ProUpRegTx (masternode operator updates)...

(QB_NEW_EN)


[grammar] ~10-~10: There might be a mistake here.
Context: ...ProUpRegTx (masternode operator updates) - ProUpRevTx (masternode revocation) - A...

(QB_NEW_EN)


[grammar] ~11-~11: There might be a mistake here.
Context: ...) - ProUpRevTx (masternode revocation) - AssetLockTx (platform credit outputs) - ...

(QB_NEW_EN)


[grammar] ~12-~12: There might be a mistake here.
Context: ... - AssetLockTx (platform credit outputs) - A versioning system has been added to de...

(QB_NEW_EN)


[grammar] ~16-~16: There might be a mistake here.
Context: ... 2 (previously unversioned) ## Benefits - SPV clients can now detect and track Das...

(QB_NEW_EN)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (10)
  • GitHub Check: check_merge
  • GitHub Check: predict_conflicts
  • GitHub Check: linux64_multiprocess-build / Build source
  • GitHub Check: linux64-build / Build source
  • GitHub Check: linux64_sqlite-build / Build source
  • GitHub Check: linux64_ubsan-build / Build source
  • GitHub Check: linux64_fuzz-build / Build source
  • GitHub Check: win64-build / Build source
  • GitHub Check: linux64_tsan-build / Build source
  • GitHub Check: linux64_nowallet-build / Build source

@UdjinM6 UdjinM6 added this to the 23 milestone Aug 21, 2025
@PastaPastaPasta PastaPastaPasta merged commit 57884f0 into dashpay:develop Aug 23, 2025
36 of 43 checks passed
@PastaPastaPasta PastaPastaPasta deleted the feat/special-tx-compact-filters branch August 23, 2025 16:14