Conversation


@abdulazeem-tk4vr abdulazeem-tk4vr commented Oct 30, 2025

Description

Optimize eth_feeHistory performance and reduce redundant RPC calls.

Related to:
#778
https://linear.app/cosmoslabs/issue/EVM-373/investigate-feehistory-performance-optimizations

Key changes:

  • Remove duplicate BlockResults fetch per block by replacing GetBlockByNumber with RPCBlockFromCometBlock after fetching Comet BlockResults.
  • Add lightweight height-keyed caches:
    • Comet block cache (height → ResultBlock)
    • Comet block results cache (height → ResultBlockResults)
    • FeeMarket params cache (height → Params)
    • Consensus gas limit cache
  • Add simple benchmark scripts to time eth_feeHistory locally.

Most critical files to review:

  • evm/rpc/backend/chain_info.go
  • evm/rpc/backend/comet.go
  • evm/rpc/backend/backend.go
  • evm/rpc/backend/utils.go

Flows that originate from FeeHistory and need caching:

  • cometBlockCache (ResultBlock)
    Flow: FeeHistory → CometBlockByNumber → cache read/fill.
  • cometBlockResultsCache (ResultBlockResults)
    Flow: FeeHistory → CometBlockResultByNumber → cache read/fill.
  • feeParamsCache (FeeMarket params)
    Flow: FeeHistory → ProcessBlock → getFeeMarketParamsAtHeight → cache read/fill → CalcBaseFee.
  • consensusGasLimitCache (consensus gas limit)
    Flow: FeeHistory → RPCBlockFromCometBlock → EthBlockFromCometBlock → BlockMaxGasAtHeight → cache read/fill → header.GasLimit → ProcessBlock uses header.

Notes:

  • No behavior change to results; only fewer RPCs and faster repeated queries.
  • Debug cache logs were temporarily added for validation and then removed.
  • Local testing with the performance scripts, Makefile unit tests, and result comparison looks good.

Closes: #778


Author Checklist

I have...

  • tackled an existing issue or discussed with a team member
  • left instructions on how to review the changes
    • Verify eth_feeHistory responses (fields/lengths/values) for several fixed heights.
    • Confirm single BlockResults fetch per block in the FeeHistory loop:
      • rpc/backend/chain_info.go: Backend.FeeHistory now uses RPCBlockFromCometBlock(cometBlock, cometBlockResult, true) instead of GetBlockByNumber.
    • Caching changes to validate:
      • FeeMarket params cache:
        • rpc/backend/backend.go: Backend.getFeeMarketParamsAtHeight(height int64) + height-keyed cache on Backend.
      • Comet block/results caches:
        • rpc/backend/comet.go (or rpc/backend/blocks.go, per repo layout): Backend.CometBlockByNumber and Backend.CometBlockResultByNumber read/write height-keyed caches.
      • Consensus gas-limit cache (new):
        • rpc/backend/backend.go: Backend: BlockMaxGasAtHeight (ctx, height) + cached map.
      • Concurrency/bounds:
        • Caches are map+sync.RWMutex on Backend; soft bound ≈ JSONRPC.FeeHistoryCap × 2.
    • Performance check:
      • Run eth_feeHistory twice against the same fixed newest block (not “latest”); the second call should be faster (cache hits).
  • targeted the main branch

Implements caching for CometBFT block results and fee market parameters,
reducing the number of direct RPC calls. This significantly improves
performance by storing frequently accessed data in memory and reusing it.

The cache is bound to `FeeHistoryCap * 2` to prevent unbounded memory growth.

Uses a read/write mutex to ensure thread-safe access to the cache.

chore: improve formatting
This commit optimizes the FeeHistory RPC method by reusing
the CometBFT block and result data already fetched. This
avoids redundant data fetching and improves performance.

Additionally, this commit adds benchmark scripts for
feeHistory to measure the impact of the caching.

chore: improve formatting again

```go
if height != nil {
	b.cacheMu.Lock()
	if cap := int(b.Cfg.JSONRPC.FeeHistoryCap) * 2; cap > 0 && len(b.cometBlockResultsCache) >= cap {
```
Contributor

why do we multiply the cap by 2 here?

Author

tl;dr: it gives better hit rates for overlapping requests that ask for the maximum number of blocks, while keeping the cap size reasonable.

I noticed that FeeHistoryCap limits a single eth_feeHistory call to a default of 100 blocks. So in a worst-case scenario where a user is indexing the network, consecutive requests often overlap because "latest" moves forward.

FeeHistoryCap = 100
Cache size = 200 entries

Request 1: blocks 1000-1099
Cache stores: blocks 1000-1099 (100 entries)

Request 2 (few seconds later): blocks 1050-1149
Cache hits: blocks 1050-1099 (50 hits)
Cache stores: blocks 1100-1149 (50 new entries)
Total cached: 150 entries (well within the 200 limit)

Author

If a user keeps requesting 100 entries in a random range, we will lose the previous context, so the cache isn't that helpful there.

I was just thinking that if there were requests x and y in completely different ranges, y would overwrite the cache if the bound were just FeeHistoryCap. But if x were requested again because y was a mistake, there would still be a warm cache with FeeHistoryCap * 2.

This is more of a safety margin, giving older entries some leeway to stay around. I think either bound is good enough to get the job done.

Author

@technicallyty please do let me know what you think about this

Contributor

I will try to carve out time to look at this this week.

Author

Thanks, I appreciate it. If it makes your job any easier, the existing RPC unit tests pass with the added features; I don't think anything breaks.

Implements a height-keyed cache for consensus block gas limits. This improves performance by reducing the number of queries to the consensus parameters, especially when fetching multiple blocks within a similar height range.
The cache is pruned to align with the fee history window, preventing unbounded memory usage.
