chore(compression): disable merkleized compression for now #3053
Conversation
Pull Request Overview
Temporarily disables merkleized compression by removing MerkleizedTableColumn implementations and related wrappers, and updates storage to use a plain CompressionColumn.
- Removes `fault-proving` feature references in Cargo.toml to disable that code path.
- Strips out all `MerkleizedTableColumn` impls and `Merkleized<>` aliases in the compression service.
- Implements `StorageColumn` for `CompressionColumn` and updates trait bounds/tests to use it directly.
Reviewed Changes
Copilot reviewed 23 out of 24 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| tests/Cargo.toml | Removed fault-proving entries to disable merkleized tests. |
| crates/services/compression/src/storage/column.rs | Dropped MerkleizedColumnOf, added StorageColumn impl for enum. |
| crates/services/compression/src/storage.rs | Changed table aliases from Merkleized<T> to direct table types. |
| crates/services/compression/src/service.rs | Updated test imports and MockStorage type to use CompressionColumn. |
| crates/services/compression/src/ports/compression_storage.rs | Updated CompressionStorage bounds to use CompressionColumn. |
| crates/services/compression/src/lib.rs | Added #![allow(unexpected_cfgs)] and commented out fault-proving. |
Comments suppressed due to low confidence (2)
crates/services/compression/src/storage/column.rs:55
- [nitpick] Avoid naming variables `str` since it shadows the built-in `str` type; consider renaming this variable to `s` or `name` for clarity.

`let str: &str = self.into();`
crates/services/compression/src/storage/column.rs:55
- `CompressionColumn` only implements `Display`, not `Into<&str>`, so `self.into()` will not compile. Change to `format!("{}", self)` or derive `strum::IntoStaticStr` and then use `self.into()`.

`let str: &str = self.into();`
```diff
  /// Merkleized Address table type alias
- pub type Address = Merkleized<address::Address>;
+ pub type Address = address::Address;
```
Hmm, curious how this will affect storage. It looks like we will still have trash in the Merkle-related tables. Maybe we need to come up with a solution for how to clean them up.
Yeah, we can wipe the compression database manually, I think?
| )] | ||
| pub enum CompressionColumn { | ||
| /// Metadata table | ||
| Metadata = 0, |
Is it a good idea to add metadata and change the table numbers now? Doing that over an existing database means the database will be corrupted.
Fairly certain it will reuse the metadata table from before; we will have to prune the compression database first.
We should get confirmation on this. @rymnc are you saying that since we didn't put the SMT's into the mainnet compression yet, this is essentially reverting a breaking change that hasn't been deployed yet?
It's already in mainnet; I can clone a devnet database and see how it works.
I checked locally, running a testnet node on 0.43.2 with compression enabled and then again with this branch. Queries for compressed blocks that were processed with 0.43.2 yield null, but the rest come through as normal, with no errors in syncing or on GraphQL. This is the expected behaviour, so it's fine in my opinion. If we:
- scale down the compression sentry
- delete the compression database
- configure --da-compression-starting-height and the new image
- scale up the compression sentry

it will work without errors.
Things are looking more stable re: storage, so it's fine for now; we have a mitigation plan in case storage increases rapidly.
Warning
This introduces some tech debt on the fault-proving feature flag: `#[allow(unexpected_cfgs)]` in the compression and compression service crates. This PR is meant to be reverted later when we continue work on FP.