
Conversation


@rymnc rymnc commented Jul 11, 2025

Warning

This introduces some tech debt around the fault-proving feature flag:

  1. Introduced `#[allow(unexpected_cfgs)]` in the compression and compression-service crates
  2. Disabled all usages of the `fault-proving` feature in those two crates

This PR is meant to be reverted later when we continue work on FP.
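The `#[allow(unexpected_cfgs)]` suppression is needed because Cargo's check-cfg now warns on `cfg` checks for features that are no longer declared. A minimal sketch of the pattern (the function names here are hypothetical, not the actual crate contents):

```rust
// With `fault-proving` removed from Cargo.toml's `[features]`, any
// remaining `#[cfg(feature = "fault-proving")]` gate would trip the
// `unexpected_cfgs` lint under cargo's check-cfg, so the crate opts out:
#![allow(unexpected_cfgs)]

/// Compiled only when the (now-undeclared) feature is enabled; with the
/// feature gone, this item simply drops out of the build.
#[cfg(feature = "fault-proving")]
pub fn fault_proving_path() -> &'static str {
    "fault-proving"
}

/// The code path that stays active while the feature is disabled.
pub fn plain_path() -> &'static str {
    "plain"
}

fn main() {
    assert_eq!(plain_path(), "plain");
}
```

This keeps the gated code in the tree (so the revert is cheap) while silencing the lint until the feature is reintroduced.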

Linked Issues/PRs

Description

Checklist

  • Breaking changes are clearly marked as such in the PR description and changelog
  • New behavior is reflected in tests
  • The specification matches the implemented behavior (link update PR if changes are needed)

Before requesting review

  • I have reviewed the code myself
  • I have created follow-up issues caused by this PR and linked them here

After merging, notify other teams

[Add or remove entries as needed]

@rymnc rymnc requested a review from Copilot July 11, 2025 16:59

Copilot AI left a comment


Pull Request Overview

Temporarily disables merkleized compression by removing MerkleizedTableColumn implementations and related wrappers, and updates storage to use a plain CompressionColumn.

  • Removes fault-proving feature references in Cargo.toml to disable that code path.
  • Strips out all MerkleizedTableColumn impls and Merkleized<> aliases in the compression service.
  • Implements StorageColumn for CompressionColumn and updates trait bounds/tests to use it directly.
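The third bullet can be sketched roughly as follows; the trait below is a simplified stand-in for fuel-core's real `StorageColumn`, so treat the names and signatures as assumptions:

```rust
// After the PR, `CompressionColumn` implements the storage column trait
// directly instead of going through a `Merkleized<T>` wrapper.
#[allow(dead_code)]
#[derive(Copy, Clone, Debug)]
#[repr(u32)]
enum CompressionColumn {
    Metadata = 0,
    Address = 1,
}

/// Hypothetical stand-in for the real `StorageColumn` trait.
trait StorageColumn {
    fn name(&self) -> String;
    fn id(&self) -> u32;
}

impl StorageColumn for CompressionColumn {
    fn name(&self) -> String {
        format!("{:?}", self)
    }
    fn id(&self) -> u32 {
        *self as u32
    }
}

fn main() {
    let col = CompressionColumn::Metadata;
    assert_eq!(col.id(), 0);
    assert_eq!(col.name(), "Metadata");
}
```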

Reviewed Changes

Copilot reviewed 23 out of 24 changed files in this pull request and generated no comments.

| File | Description |
| --- | --- |
| tests/Cargo.toml | Removed fault-proving entries to disable merkleized tests. |
| crates/services/compression/src/storage/column.rs | Dropped MerkleizedColumnOf, added StorageColumn impl for the enum. |
| crates/services/compression/src/storage.rs | Changed table aliases from Merkleized<T> to direct table types. |
| crates/services/compression/src/service.rs | Updated test imports and MockStorage type to use CompressionColumn. |
| crates/services/compression/src/ports/compression_storage.rs | Updated CompressionStorage bounds to use CompressionColumn. |
| crates/services/compression/src/lib.rs | Added #![allow(unexpected_cfgs)] and commented out fault-proving. |
Comments suppressed due to low confidence (2)

crates/services/compression/src/storage/column.rs:55

  • [nitpick] Avoid naming variables `str`, since it shadows the built-in `str` type; consider renaming this variable to `s` or `name` for clarity.

        let str: &str = self.into();

crates/services/compression/src/storage/column.rs:55

  • `CompressionColumn` only implements `Display`, not `Into<&str>`, so `self.into()` will not compile. Change to `format!("{}", self)`, or derive `strum::IntoStaticStr` and then use `self.into()`.

        let str: &str = self.into();
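Both suppressed comments point at the same line; a sketch of the combined fix, using a hypothetical enum in place of `CompressionColumn`:

```rust
use std::fmt;

#[derive(Debug)]
enum Column {
    Metadata,
}

impl fmt::Display for Column {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{:?}", self)
    }
}

impl Column {
    fn name(&self) -> String {
        // Renamed from `str` to avoid shadowing the built-in type, and
        // built via `Display` instead of a nonexistent `Into<&str>` impl:
        format!("{}", self)
    }
}

fn main() {
    assert_eq!(Column::Metadata.name(), "Metadata");
}
```

The `strum::IntoStaticStr` derive mentioned in the comment would instead generate a real `From<&Column> for &'static str` impl, making the original `self.into()` compile.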

@rymnc rymnc marked this pull request as ready for review July 14, 2025 04:17
@rymnc rymnc requested review from a team, Dentosal, MitchTurner and xgreenx as code owners July 14, 2025 04:17
@rymnc rymnc self-assigned this Jul 14, 2025

```diff
 /// Merkleized Address table type alias
-pub type Address = Merkleized<address::Address>;
+pub type Address = address::Address;
```
Collaborator

Hmm, curious how this will affect the storage. It looks like we will still have stale data in the Merkle-related tables. Maybe we need to come up with a solution for cleaning them up.

Member Author

yeah, we can wipe the compression database manually, I think?

```rust
)]
pub enum CompressionColumn {
    /// Metadata table
    Metadata = 0,
```
Collaborator

Is it a good idea to add metadata and change the table numbers now? Doing that over an existing database means the database will be corrupted.
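A toy illustration of the concern (the ids and table names here are made up): if the numeric discriminants that identify columns on disk are renumbered, data written under the old layout is silently decoded as a different table by the new one.

```rust
// Hypothetical old layout: `Address` data was persisted under id 0.
#[allow(dead_code)]
#[repr(u32)]
#[derive(Debug, Copy, Clone, PartialEq)]
enum OldColumn {
    Address = 0,
    AssetId = 1,
}

// Hypothetical new layout: `Metadata` now occupies id 0.
#[allow(dead_code)]
#[repr(u32)]
#[derive(Debug, Copy, Clone, PartialEq)]
enum NewColumn {
    Metadata = 0,
    Address = 1,
}

fn main() {
    // A record written under the old layout's `Address` id...
    let persisted_id = OldColumn::Address as u32;
    // ...would be read back as `Metadata` by the new layout.
    assert_eq!(persisted_id, NewColumn::Metadata as u32);
}
```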

Member Author

@rymnc rymnc Jul 14, 2025

fairly certain it will reuse the metadata table from before; we will have to prune the compression database first

Member

We should get confirmation on this. @rymnc are you saying that since we didn't put the SMT's into the mainnet compression yet, this is essentially reverting a breaking change that hasn't been deployed yet?

Member Author

it's already in mainnet, I can clone a devnet database and see how it works

Member Author

I checked locally by running a testnet node on 0.43.2 with compression enabled, and then again with this branch. Queries for compressed blocks that were processed with 0.43.2 yield null, but the rest come through as normal, with no errors in syncing or on GraphQL. This is the expected behaviour, so it's fine in my opinion. If we:

  1. scale down compression sentry
  2. delete compression database
  3. configure `--da-compression-starting-height` and the new image
  4. scale up compression sentry

it will work without errors.
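The effect of step 3 can be sketched as a simple height check (the function name is hypothetical; fuel-core's actual gating logic lives inside the compression service):

```rust
/// Hypothetical sketch: with a starting height configured, blocks below
/// it are never compressed, so a freshly wiped compression database is
/// only populated from that height onward.
fn should_compress(block_height: u64, starting_height: Option<u64>) -> bool {
    match starting_height {
        Some(start) => block_height >= start,
        None => true,
    }
}

fn main() {
    // Blocks from before the configured height are skipped, which matches
    // the observation that queries for them yield null.
    assert!(!should_compress(99, Some(100)));
    assert!(should_compress(100, Some(100)));
}
```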

@rymnc rymnc requested review from Voxelot and xgreenx July 28, 2025 13:24
@rymnc rymnc closed this Oct 18, 2025

rymnc commented Oct 18, 2025

things are looking more stable re: storage, so it's fine for now; we have a mitigation plan in case storage grows rapidly
