Base Benchmark

Base Benchmark is a performance testing framework for Ethereum execution clients. Compare client performance, identify bottlenecks, and ensure reliability before deployment.


Features

  • Performance Evaluation: Test both block building and validation performance across execution clients (Geth, Reth, and more)
  • Comparative Analysis: Measure client behavior across various inputs and workloads
  • Metric Collection: Track critical metrics including submission times, latency, and throughput
  • Flexible Workloads: Configure transaction patterns to match your specific needs
  • Interactive Dashboard: Generate beautiful HTML reports with charts and run comparisons
  • Import & Merge: Combine benchmark results from multiple machines with flexible tagging

Repository Structure

.
├── Makefile              # Build and development tasks
├── go.mod                # Go module dependencies
├── benchmark/            # CLI application
│   ├── cmd/              # Main entry point
│   ├── config/           # Configuration types
│   └── flags/            # CLI flags
├── runner/               # Core benchmarking logic
│   ├── benchmark/        # Benchmark execution
│   ├── clients/          # Client integrations (Geth, Reth)
│   ├── importer/         # Run import functionality
│   ├── network/          # Network setup and management
│   └── payload/          # Transaction payload generation
├── configs/              # Benchmark configurations
│   ├── examples/         # Development and testing configs
│   └── public/           # Production-ready benchmarks
├── contracts/            # Smart contracts for testing
│   └── src/              # Solidity source files
├── report/               # Interactive dashboard
│   └── src/              # React TypeScript application
└── clients/              # Client build scripts

Prerequisites

  • Go: Version 1.21 or later. Install from go.dev
  • Foundry: For smart contract compilation. See installation guide
  • Node.js: Version 18+ for the interactive dashboard. Install from nodejs.org
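
You can verify each tool is installed before building:

go version        # should report go1.21 or later
forge --version   # installed as part of Foundry
node --version    # should report v18 or later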

Getting Started

1. Clone the Repository

git clone https://github.com/base/benchmark.git
cd benchmark
git submodule update --init --recursive

2. Build the Application

make build

The binary will be located at bin/base-bench.

3. Build Client Binaries (Optional)

To build Geth and Reth from source:

make build-binaries

Alternatively, you can specify paths to pre-built binaries when running benchmarks.
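
For example, the run command shown in the next step accepts the client binary flags documented under Configuration; the paths below are illustrative:

./bin/base-bench run \
  --config ./configs/public/basic.yml \
  --root-dir ./data-dir \
  --output-dir ./output \
  --geth-bin /usr/local/bin/geth \
  --reth-bin /usr/local/bin/reth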

4. Run Your First Benchmark

./bin/base-bench run \
  --config ./configs/public/basic.yml \
  --root-dir ./data-dir \
  --output-dir ./output

To see available options:

./bin/base-bench run --help

5. View Results in the Interactive Dashboard

cd report/
npm install
npm run dev

Open your browser to the URL shown (typically http://localhost:5173).
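
To produce a static build instead of running the dev server (assuming the standard Vite scripts in report/package.json):

npm run build     # emit static assets (typically to dist/)
npm run preview   # serve the built assets locally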

Available Benchmarks

Explore the comprehensive collection of benchmark configurations:

📁 Configuration Guide - Detailed documentation of all available benchmark configurations

  • examples/ - Development and testing configurations for specific workloads
  • public/ - Production-ready benchmarks for standardized testing

Choose from storage operations, precompile tests, token workloads, mainnet simulations, and more.

Architecture

Benchmark Structure

Each benchmark consists of configurable tests with various input parameters:

payloads:
  - name: Transfer only
    id: transfer-only
    type: transfer-only

benchmarks:
  - name: Test Performance
    description: Execution Speed
    variables:
      - type: payload
        value: transfer-only
      - type: node_type
        values:
          - reth
          - geth
      - type: num_blocks
        value: 20

This configuration runs a transfer-only transaction payload against both Geth and Reth for 20 blocks. Variables with a single value are held fixed, while variables with a values list expand into one test run per entry, so this example yields one run per client.
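
If you saved this configuration as, say, my-benchmark.yml (a hypothetical filename), you would execute it with the same run command as before:

./bin/base-bench run \
  --config ./my-benchmark.yml \
  --root-dir ./data-dir \
  --output-dir ./output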

Test Methodology

Each test executes a standardized workflow:

  1. Initialize a sequencer/block builder with specified gas limits
  2. Generate transactions and submit to the sequencer mempool
  3. Record all payloads via engine_forkchoiceUpdated and engine_getPayload
  4. Set up the validator node
  5. Process payloads through engine_newPayload

This approach allows precise measurement of performance characteristics for both block production and validation.
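
For orientation, each engine_* call above is an authenticated JSON-RPC request against the client's auth RPC port (8551 for Geth, 9551 for Reth in this setup). As a minimal sketch, replaying one recorded payload by hand might look like the following, assuming a valid JWT in $JWT and a request body in newpayload.json (both illustrative; the framework performs these calls for you, and the exact method version, e.g. V3, depends on the active fork):

# newpayload.json wraps the recorded execution payload, e.g.
# {"jsonrpc":"2.0","id":1,"method":"engine_newPayloadV3","params":[...]}
curl -s -X POST http://localhost:8551 \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $JWT" \
  --data @newpayload.json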

Configuration

Available Flags

NAME:
   benchmark run - run benchmark

USAGE:
   benchmark run [command options]

OPTIONS:
   --config value                  Config Path ($BASE_BENCH_CONFIG)
   --root-dir value                Root Directory ($BASE_BENCH_ROOT_DIR)
   --output-dir value              Output Directory ($BASE_BENCH_OUTPUT_DIR)
   --tx-fuzz-bin value             Transaction Fuzzer path (default: "../tx-fuzz/cmd/livefuzzer/livefuzzer")

   # Reth Configuration
   --reth-bin value                Reth binary path (default: "reth")
   --reth-http-port value          HTTP port (default: 9545)
   --reth-auth-rpc-port value      Auth RPC port (default: 9551)
   --reth-metrics-port value       Metrics port (default: 9080)

   # Geth Configuration
   --geth-bin value                Geth binary path (default: "geth")
   --geth-http-port value          HTTP port (default: 8545)
   --geth-auth-rpc-port value      Auth RPC port (default: 8551)
   --geth-metrics-port value       Metrics port (default: 8080)

   # General Options
   --proxy-port value              Proxy port (default: 8546)
   --help, -h                      Show help (default: false)
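
As an illustration, the port flags are useful for avoiding collisions with a node already listening on the defaults; the values below are arbitrary:

./bin/base-bench run \
  --config ./configs/public/basic.yml \
  --root-dir ./data-dir \
  --output-dir ./output \
  --geth-http-port 18545 \
  --geth-auth-rpc-port 18551 \
  --geth-metrics-port 18080 \
  --proxy-port 18546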

Managing Test Runs

Understanding Runs and Suites

When you view benchmark results in the interactive dashboard, you can switch between different test runs using the run switcher.

Creating Test Runs

Running benchmarks adds a new suite by default:

./bin/base-bench run --config ./configs/public/basic.yml

Each execution creates a new suite entry in the run list, allowing you to track performance over time or across different configurations.

Combining Multiple Runs

Use import-runs to merge benchmark results from multiple machines or configurations:

./bin/base-bench import-runs \
  --output-dir ./output \
  ./results-from-server-1/metadata.json

Two import strategies:

  1. Add to latest suite with tags - Merge imported runs into your most recent suite, using tags to differentiate:

    # Add imported runs to the last suite with tags for differentiation
    ./bin/base-bench import-runs \
      --src-tag "instance=server-lg" \
      --dest-tag "instance=server-md" \
      --output-dir ./output \
      ./results-from-server-1/metadata.json
    
    # --src-tag fills missing tags on existing runs (won't overwrite)
    # --dest-tag applies to the imported runs
    # Useful for comparing hardware configurations within the same test run
  2. Create new separate suite - Add imported runs as an independent suite in the list:

    # Interactive mode (recommended) - prompts you to choose strategy and configure tags
    ./bin/base-bench import-runs \
      --output-dir ./output \
      ./results-from-server-1/metadata.json
    
    # Creates a new entry differentiated by BenchmarkRun ID
    # Useful for tracking performance across different code versions or time periods

Interactive Mode: Without specifying tags, the tool enters interactive mode and guides you through:

  • Choosing between adding to last suite or creating new suite
  • Configuring appropriate tags if needed
  • Confirming the import operation

This flexibility lets you organize benchmarks by hardware type, client version, or any dimension relevant to your analysis.

Contributing

We welcome contributions! Please see CONTRIBUTING.md for guidelines on how to contribute to this project.

License

This project is licensed under the MIT License. See the LICENSE file for details.


Built with ❤️ by Base
