A comprehensive transaction indexer built with Ponder.sh that indexes every transaction ever executed on your private EVM-compatible chain.
- Complete Transaction Indexing: Indexes every transaction from genesis block (block 0) onwards
- Block Data: Captures comprehensive block information including gas usage, timestamps, and transaction counts
- Transaction Details: Records all transaction data including gas prices, gas usage, input data, and execution status
- REST API: Provides easy-to-use REST endpoints for querying indexed data
- GraphQL API: Full GraphQL support for complex queries
- Real-time Indexing: Continuously indexes new blocks as they are mined
Set your RPC URL in your environment:
export PONDER_RPC_URL_1="http://your-private-chain-rpc-url:8545"

- Install dependencies:

npm install

- Set your chain ID and RPC URL:

export PONDER_CHAIN_ID_1="1234"
export PONDER_RPC_URL_1="http://your-chain-rpc:8545"

- Start the indexer:

npm run dev

The indexer will start from block 0 and index every transaction on your chain.
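These environment variables are consumed in ponder.config.ts. The snippet below is only a minimal sketch of what such a config typically looks like, not this repository's exact file: the network name privateChain and block source name EveryBlock are placeholders, and newer Ponder releases rename some keys (for example chains/rpc instead of networks/transport), so adapt it to the installed version.

```typescript
import { createConfig } from "ponder";
import { http } from "viem";

export default createConfig({
  networks: {
    // "privateChain" is a placeholder name; use whatever this project's config defines.
    privateChain: {
      chainId: Number(process.env.PONDER_CHAIN_ID_1),
      transport: http(process.env.PONDER_RPC_URL_1),
    },
  },
  blocks: {
    // A block source with interval 1 runs an indexing function for every block,
    // starting at genesis (block 0), which is how every transaction gets picked up.
    EveryBlock: {
      network: "privateChain",
      startBlock: 0,
      interval: 1,
    },
  },
});
```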
GET /transactions?limit=100&offset=0
Returns a paginated list of transactions.
GET /transactions/{hash}
Returns details for a specific transaction.
GET /blocks?limit=100&offset=0
Returns a paginated list of blocks.
GET /blocks/{number}
Returns details for a specific block.
GET /stats
Returns indexing statistics including total transactions, blocks, and latest block info.
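These endpoints work with any HTTP client. The sketch below assumes the indexer is running locally on Ponder's default port 42069 (the same port used for the GraphQL playground below); adjust the base URL for your deployment.

```typescript
// Minimal sketch: query the indexer's REST API from a Node.js (18+) script.
const BASE_URL = "http://localhost:42069"; // assumption: Ponder's default port

async function main() {
  // Most recent transactions, 10 per page.
  const txRes = await fetch(`${BASE_URL}/transactions?limit=10&offset=0`);
  console.log("Recent transactions:", await txRes.json());

  // Overall indexing statistics (total transactions, blocks, latest block info).
  const statsRes = await fetch(`${BASE_URL}/stats`);
  console.log("Indexing stats:", await statsRes.json());
}

main().catch(console.error);
```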
Access the GraphQL playground at:
http://localhost:42069/graphql
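The same endpoint also accepts standard GraphQL POST requests, so it can be queried programmatically. Here is a minimal sketch using Node 18+'s built-in fetch, reusing the transactionss field shown in the examples below:

```typescript
// Minimal sketch: send a GraphQL query to the indexer over HTTP.
const GRAPHQL_URL = "http://localhost:42069/graphql";

const query = /* GraphQL */ `
  query {
    transactionss(limit: 5, orderBy: "timestamp", orderDirection: "desc") {
      items {
        hash
        from
        to
        value
      }
    }
  }
`;

async function main() {
  const res = await fetch(GRAPHQL_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  const { data, errors } = await res.json();
  if (errors) throw new Error(JSON.stringify(errors));
  console.log(data.transactionss.items);
}

main().catch(console.error);
```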
Example GraphQL queries:
# Get recent transactions
query {
  transactionss(limit: 10, orderBy: "timestamp", orderDirection: "desc") {
    items {
      hash
      from
      to
      value
      blockNumber
      timestamp
      gasPrice
      gasUsed
      gasLimit
      input
      nonce
      status
    }
  }
}
# Get blocks with transaction counts
query {
  blockss(limit: 10, orderBy: "number", orderDirection: "desc") {
    items {
      number
      hash
      timestamp
      parentHash
      transactionCount
      gasUsed
      gasLimit
    }
  }
}
# Get a specific transaction by hash
query {
  transactions(hash: "0xe4c10f3946e20156207b7215b03a62a130ef9e98e5ff7511153091f69b5f451e") {
    hash
    from
    to
    value
    blockNumber
    timestamp
    status
  }
}
# Get transactions from a specific block
query {
  transactionss(where: { blockNumber: "275" }) {
    items {
      hash
      from
      to
      value
      transactionIndex
    }
  }
}

The transactions table stores the following fields:

- hash: Transaction hash (primary key)
- blockNumber: Block number containing the transaction
- blockHash: Hash of the containing block
- transactionIndex: Position of the transaction within the block
- from: Sender address
- to: Recipient address (null for contract creation)
- value: ETH value transferred
- gasPrice: Gas price used
- gasUsed: Actual gas consumed
- gasLimit: Gas limit set
- input: Transaction input data
- nonce: Sender nonce
- timestamp: Block timestamp
- status: Transaction status (1 = success, 0 = failed)

The blocks table stores the following fields:

- hash: Block hash (primary key)
- number: Block number
- timestamp: Block timestamp
- parentHash: Previous block hash
- gasUsed: Total gas used in the block
- gasLimit: Block gas limit
- transactionCount: Number of transactions in the block
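For reference, a ponder.schema.ts matching these fields could look roughly like the sketch below. It assumes a recent Ponder release with the onchainTable API, and the column types are assumptions; this repository's actual schema file is authoritative.

```typescript
import { onchainTable } from "ponder";

// Sketch only: column types are assumptions, check ponder.schema.ts in this repo.
export const transactions = onchainTable("transactions", (t) => ({
  hash: t.hex().primaryKey(),
  blockNumber: t.bigint().notNull(),
  blockHash: t.hex().notNull(),
  transactionIndex: t.integer().notNull(),
  from: t.hex().notNull(),
  to: t.hex(), // null for contract creation
  value: t.bigint().notNull(),
  gasPrice: t.bigint(),
  gasUsed: t.bigint(),
  gasLimit: t.bigint().notNull(),
  input: t.text().notNull(),
  nonce: t.integer().notNull(),
  timestamp: t.bigint().notNull(),
  status: t.integer(), // 1 = success, 0 = failed
}));

export const blocks = onchainTable("blocks", (t) => ({
  hash: t.hex().primaryKey(),
  number: t.bigint().notNull(),
  timestamp: t.bigint().notNull(),
  parentHash: t.hex().notNull(),
  gasUsed: t.bigint().notNull(),
  gasLimit: t.bigint().notNull(),
  transactionCount: t.integer().notNull(),
}));
```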
- Initial Sync: Indexing from genesis can take time depending on chain history
- RPC Limits: Ensure your RPC endpoint can handle the indexing load
- Database: Uses SQLite by default, consider PostgreSQL for production
- Memory: Transaction receipts are fetched for every transaction to obtain accurate gas usage, which adds memory and RPC overhead
The indexer logs progress as it processes blocks:
Indexed block 1234 with 15 transactions
Monitor the /stats endpoint to track indexing progress.
To index additional transaction or block data, modify:
- ponder.schema.ts - Add new columns
- src/index.ts - Update indexing logic
- src/api/index.ts - Update API responses (see the sketch below)
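As an illustration of the last item, the sketch below adds a hypothetical /transactions/count route alongside the GraphQL endpoint. It assumes a recent Ponder version in which src/api/index.ts exports a Hono app and exposes a read-only drizzle database through the ponder:api module; the route name is an example, not part of this project's existing API.

```typescript
import { db } from "ponder:api";
import schema from "ponder:schema";
import { Hono } from "hono";
import { graphql } from "ponder";
import { count } from "drizzle-orm";

const app = new Hono();

// Keep the GraphQL API available at /graphql.
app.use("/graphql", graphql({ db, schema }));

// ...the existing /transactions, /blocks, and /stats routes would be registered here.

// Hypothetical extra endpoint: total number of indexed transactions.
app.get("/transactions/count", async (c) => {
  const [row] = await db.select({ value: count() }).from(schema.transactions);
  return c.json({ totalTransactions: row?.value ?? 0 });
});

export default app;
```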
To index only specific types of transactions, add filtering logic in src/index.ts:
// Example: Only index transactions with value > 0
if (transaction.value > 0n) {
  await context.db.insert(schema.transactions).values({
    // ... transaction data
  });
}

- RPC Connection: Ensure your RPC URL is accessible and supports the required methods
- Chain ID Mismatch: Verify the chain ID in config matches your network
- Memory Issues: For large chains, consider increasing Node.js memory limit
- Rate Limiting: Some RPC providers have rate limits that may slow indexing
Check Ponder logs for detailed error information and indexing progress.
For production use:
- Use PostgreSQL instead of SQLite (see the configuration sketch after this list)
- Set up proper monitoring and alerting
- Configure backup strategies for indexed data
- Consider horizontal scaling for high-throughput chains
- Implement proper error handling and recovery mechanisms
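For the first point, the following is a minimal sketch, assuming a recent Ponder version where the database is selected in ponder.config.ts and DATABASE_URL holds the PostgreSQL connection string; key names may differ across versions.

```typescript
import { createConfig } from "ponder";
import { http } from "viem";

export default createConfig({
  // Store indexed data in PostgreSQL instead of the default local SQLite database.
  // Assumes DATABASE_URL is set, e.g. postgresql://user:pass@db-host:5432/indexer
  database: {
    kind: "postgres",
    connectionString: process.env.DATABASE_URL,
  },
  // Network and block-source settings as in the setup sketch earlier in this README.
  networks: {
    privateChain: {
      chainId: Number(process.env.PONDER_CHAIN_ID_1),
      transport: http(process.env.PONDER_RPC_URL_1),
    },
  },
  blocks: {
    EveryBlock: { network: "privateChain", startBlock: 0, interval: 1 },
  },
});
```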
Feel free to submit issues and enhancement requests!