EVM-373 : Investigate FeeHistory performance optimizations #781
base: main
Conversation
Implements caching for CometBFT block results and fee market parameters, reducing the number of direct RPC calls. This significantly improves performance by storing frequently accessed data in memory and reusing it. The cache is bounded to `FeeHistoryCap * 2` entries to prevent unbounded memory growth, and a read/write mutex ensures thread-safe access to it.

chore: improve formatting

This commit optimizes the FeeHistory RPC method by reusing the CometBFT block and result data already fetched, avoiding redundant data fetching and improving performance. It also adds benchmark scripts for feeHistory to measure the impact of the caching.

chore: improve formatting again
```go
if height != nil {
	b.cacheMu.Lock()
	if cap := int(b.Cfg.JSONRPC.FeeHistoryCap) * 2; cap > 0 && len(b.cometBlockResultsCache) >= cap {
```
why do we multiply the cap by 2 here?
tl;dr: it gives better hit rates for overlapping requests where the maximum number of blocks is requested, at a reasonable cap size.
I noticed that FeeHistoryCap limits a single eth_feeHistory call to a default of 100 blocks. So in a plausible worst-case scenario, where a user is indexing the network for details, consecutive requests often overlap because "latest" keeps moving forward.
```
FeeHistoryCap = 100
Cache size    = 200 entries

Request 1: blocks 1000-1099
  Cache stores: blocks 1000-1099 (100 entries)

Request 2 (a few seconds later): blocks 1050-1149
  Cache hits:   blocks 1050-1099 (50 hits)
  Cache stores: blocks 1100-1149 (50 new entries)

Total cached: 150 entries (well within the 200 limit)
```
If a user keeps requesting 100 entries in random, non-overlapping ranges, we lose the previous context either way, so the larger cap isn't that helpful there.
My thinking was: if there were requests x and y in completely different ranges, y would overwrite the whole cache at just FeeHistoryCap. But if x were requested again because y was a mistake, the cache would still be warm at FeeHistoryCap * 2.
It's more of a safety margin that gives older entries some leeway to survive. I think either size is good enough to get the job done.
@technicallyty please do let me know what you think about this
I will try to carve out time to look at this this week.
Thanks, I appreciate it. If it makes your job any easier, the existing RPC unit tests pass with the added features, so I don't think anything breaks.
Implements a height-keyed cache for consensus block gas limits. This improves performance by reducing the number of queries to the consensus parameters, especially when fetching multiple blocks within a similar height range. The cache is pruned to align with the fee history window, preventing unbounded memory usage.
Description
Optimize eth_feeHistory performance and reduce redundant RPC calls.
Related to:
#778
https://linear.app/cosmoslabs/issue/EVM-373/investigate-feehistory-performance-optimizations
Key changes:
Most critical files to review:
Flows that originate from FeeHistory and need caching:
Flow: FeeHistory → CometBlockByNumber → cache read/fill.
Flow: FeeHistory → CometBlockResultByNumber → cache read/fill.
Flow: FeeHistory → ProcessBlock → getFeeMarketParamsAtHeight → cache read/fill → CalcBaseFee.
Flow: FeeHistory → RPCBlockFromCometBlock → EthBlockFromCometBlock → BlockMaxGasAtHeight → cache read/fill → header.GasLimit → ProcessBlock uses header.
Notes:
Closes: #778