A comprehensive benchmarking framework for Ethereum JSON-RPC clients, designed to evaluate and compare the performance and behavior of different client implementations like Geth and Nethermind.
This project runs predefined RPC tests derived from the official Ethereum Execution APIs spec, generates standardized performance metrics, checks for response consistency, and provides both historic tracking and real-time analysis through a modern web dashboard.
- Performance Benchmarking: Benchmark and compare Ethereum clients under realistic load using k6
- Historic Tracking: Store and analyze performance trends over time with PostgreSQL + Grafana integration
- Real-time Dashboard: Modern React UI for viewing results, trends, and comparisons
- Response Validation: Validate RPC response compatibility with ethereum/execution-apis
- Schema Validation: Check responses against official Ethereum JSON-RPC specifications
- Baseline Management: Set performance baselines and detect regressions automatically
- Multiple Output Formats: Generate HTML reports, CSV exports, and JSON data
- WebSocket Updates: Live streaming of benchmark results for real-time monitoring
- Grafana Integration: Pre-built dashboards for time-series analysis and alerting
json-bench/
│
├── clients/ # Docker Compose setups for each client
│ ├── geth/
│ └── nethermind/
│
├── config/ # YAML test configurations
│ ├── mixed.yaml # Mixed workload benchmark
│ ├── read-heavy.yaml # Read-heavy workload benchmark
│ ├── storage-example.yaml # Historic storage configuration
│ └── param_variations.yaml
│
├── runner/ # Go benchmark runner with historic tracking
│ ├── main.go # Main CLI application
│ ├── api/ # HTTP API server and WebSocket support
│ ├── storage/ # PostgreSQL integration
│ ├── analysis/ # Trend analysis and regression detection
│ └── generator/ # K6 script generation and HTML reports
│
├── dashboard/ # React dashboard for historic analysis
│ ├── src/
│ │ ├── pages/ # Dashboard pages (trends, comparisons, baselines)
│ │ ├── components/ # Reusable UI components
│ │ └── api/ # API client for backend integration
│ └── dist/ # Built dashboard files
│
├── metrics/ # Prometheus + Grafana configuration
│ ├── grafana-provisioning/
│ ├── dashboards/ # Pre-built Grafana dashboards
│ └── docker-compose.grafana.yml
│
└── cmd/ # Additional tools
├── compare/ # Response comparison utilities
└── compare-openrpc/ # OpenRPC-based comparison tools
- Docker and Docker Compose (for client nodes and infrastructure)
- Go 1.20+ (for the benchmark runner)
- Node.js 18+ (for the React dashboard)
- k6 (for load testing - install from https://k6.io/)
- PostgreSQL (for historic tracking - included in Docker Compose)
1. Clone the repository:

   git clone <repository-url>
   cd json-bench

2. Start the infrastructure (PostgreSQL and Prometheus):

   # Start PostgreSQL and Prometheus for data storage
   docker-compose -f metrics/docker-compose.grafana.yml up postgres prometheus -d

3. Set up your client nodes:

   # Start Ethereum client containers
   docker-compose up -d geth nethermind
   # Or configure your own endpoints in the config files

4. Run a benchmark:

   # Basic benchmark (no historic tracking)
   go run ./runner/main.go -config ./config/mixed.yaml
   # With historic tracking (requires PostgreSQL)
   go run ./runner/main.go -config ./config/mixed.yaml -historic -storage-config ./config/storage-example.yaml
   # View results
   open results/report.html

5. Access the services:
   - PostgreSQL: localhost:5432 (postgres/postgres)
   - Prometheus: http://localhost:9090
   - API Server: http://localhost:8080 (when running with the -api flag)
   - Grafana: Manual setup required (see Grafana Integration section)
# Run a mixed workload benchmark
go run ./runner/main.go -config ./config/mixed.yaml
# Run a read-heavy benchmark
go run ./runner/main.go -config ./config/read-heavy.yaml
# Run with custom parameters
go run ./runner/main.go \
-config ./config/mixed.yaml \
-output ./custom-results \
-concurrency 10 \
-timeout 60
Enable historic tracking to store results in PostgreSQL and analyze trends over time:
# Run with historic tracking enabled
go run ./runner/main.go \
-config ./config/mixed.yaml \
-historic \
-storage-config ./config/storage-example.yaml
# Run in historic analysis mode (no new benchmark)
go run ./runner/main.go \
-config ./config/mixed.yaml \
-historic-mode \
-storage-config ./config/storage-example.yaml
Start the HTTP API server for real-time data access and WebSocket updates:
# Start the API server with historic storage
go run ./runner/main.go \
-api \
-storage-config ./config/storage-example.yaml
# API will be available at http://localhost:8080
Available API endpoints:
- GET /api/runs - List historic benchmark runs
- GET /api/runs/:id - Get specific run details
- GET /api/trends - Get performance trend data
- GET /api/baselines - List performance baselines
- GET /api/compare?run1=:id1&run2=:id2 - Compare two runs
- POST /api/runs/:id/baseline - Set run as baseline
- WS /api/ws - WebSocket for real-time updates
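As a quick illustration of the compare endpoint, the snippet below only builds the request URL from two hypothetical run IDs; with the API server running, the commented curl line would fetch the actual comparison.

```shell
# Hypothetical run IDs, for illustration only
RUN1="20250103-120000-abc123"
RUN2="20250103-130000-def456"

# Build the URL used by GET /api/compare
COMPARE_URL="http://localhost:8080/api/compare?run1=${RUN1}&run2=${RUN2}"
echo "$COMPARE_URL"

# With the API server running (-api flag), fetch the comparison:
#   curl "$COMPARE_URL"
```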
The modern React dashboard provides an intuitive interface for analyzing benchmark results and trends:
# Install dashboard dependencies
cd dashboard
npm install
# Start development server
npm run dev
# Dashboard available at http://localhost:3000
# Build for production
npm run build
npm run preview
Dashboard Features:
- Dashboard Page: Overview of recent runs and performance trends
- Run Details: Detailed analysis of individual benchmark runs
- Comparison View: Side-by-side comparison of multiple runs
- Baseline Management: Set and manage performance baselines
- Trend Analysis: Interactive charts showing performance over time
- Regression Alerts: Automatic detection of performance regressions
For advanced time-series analysis and alerting, you can set up Grafana manually:
1. Install Grafana locally or use Docker:

   # Using Docker
   docker run -d --name=grafana -p 3000:3000 grafana/grafana:latest
   # Or install locally from https://grafana.com/grafana/download

2. Open Grafana: http://localhost:3000 (admin/admin)

3. Configure data sources:
   - Prometheus:
     - URL: http://localhost:9090 (or http://prometheus:9090 if using the Docker network)
     - Access: Server (default)
   - PostgreSQL (for historic data):
     - Host: localhost:5432 (or postgres:5432 if using the Docker network)
     - Database: jsonrpc_bench
     - User: postgres
     - Password: postgres
     - SSL Mode: disable

4. Import dashboards from metrics/dashboards/ (if available)

5. Create custom dashboards for:
   - Client performance comparison
   - Method-specific latency trends
   - Error rate monitoring
   - System resource usage
   - Historic trend analysis

6. Set up alerting for performance regressions and system issues
Compare JSON-RPC responses across different clients to identify inconsistencies:
# Compare using YAML configuration
go run ./cmd/compare/main.go -config ./config/compare.yaml
# Compare using OpenRPC specification
go run ./cmd/compare-openrpc/main.go \
-spec https://raw.githubusercontent.com/ethereum/execution-apis/main/openrpc.json \
-clients "geth:http://localhost:8545,nethermind:http://localhost:8546"
# Compare specific methods with parameter variations
go run ./cmd/compare-openrpc/main.go \
-spec https://raw.githubusercontent.com/ethereum/execution-apis/main/openrpc.json \
-variations ./config/param_variations.yaml \
-clients "geth:http://localhost:8545,nethermind:http://localhost:8546" \
-filter "eth_call,eth_getBalance"
# View comparison results
open comparison-results/comparison-report.html
Configure PostgreSQL storage for historic tracking. Choose the appropriate configuration file based on your setup:
Local Development (outside Docker):
# Use storage-example.yaml for local PostgreSQL connection
go run ./runner/main.go -config ./config/mixed.yaml -historic -storage-config ./config/storage-example.yaml
Docker Environment:
# Use storage-docker.yaml when running inside Docker containers
# (This config uses 'postgres' hostname which only exists in Docker network)
docker run ... -storage-config ./config/storage-docker.yaml
Configuration Examples:
# config/storage-example.yaml (for local development)
historic_path: "./historic"
enable_historic: true
postgresql:
host: "localhost" # Use localhost when running outside Docker
port: 5432
database: "jsonrpc_bench"
username: "postgres"
password: "postgres"
ssl_mode: "disable"
grafana:
metrics_table: "benchmark_metrics"
runs_table: "benchmark_runs"
retention_policy:
metrics_retention: "30d"
aggregated_retention: "90d"
# config/storage-docker.yaml (for Docker environment)
historic_path: "/app/historic"
enable_historic: true
postgresql:
host: "postgres" # Use service name when running in Docker
port: 5432
database: "jsonrpc_bench"
username: "postgres"
password: "postgres"
ssl_mode: "disable"
Set performance baselines to detect regressions:
# Set a run as baseline via API
curl -X POST http://localhost:8080/api/runs/20250103-120000-abc123/baseline \
-H "Content-Type: application/json" \
-d '{"name": "Production Baseline", "description": "Post-optimization baseline"}'
# Compare current run against baseline
curl "http://localhost:8080/api/compare?run1=baseline&run2=20250103-130000-def456"
Create custom benchmark configurations:
# Custom benchmark configuration
test_name: "Custom Load Test"
description: "High-load test for production validation"
clients:
- name: "geth"
url: "http://localhost:8545"
- name: "nethermind"
url: "http://localhost:8546"
load_test:
target_rps: 500
duration: "5m"
ramp_duration: "30s"
endpoints:
- method: "eth_blockNumber"
weight: 20
- method: "eth_getBalance"
params: ["0x742d35Cc641C0532a7D4567bb19f68cE3FdD72cD", "latest"]
weight: 40
Test methods with different parameter sets:
# config/param_variations.yaml
eth_call:
- [{"to": "0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2"}, "latest"]
- [{"to": "0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2"}, "pending"]
eth_getBalance:
- ["0x742d35Cc641C0532a7D4567bb19f68cE3FdD72cD", "latest"]
- ["0x742d35Cc641C0532a7D4567bb19f68cE3FdD72cD", "pending"]
Deploy the entire stack using Docker:
# Full stack deployment
docker-compose -f metrics/docker-compose.grafana.yml up -d
# Build and deploy the dashboard
cd dashboard
docker build -t jsonrpc-bench-dashboard .
docker run -p 3000:80 jsonrpc-bench-dashboard
Configure the application using environment variables:
# Database configuration
export DATABASE_URL="postgres://user:password@localhost:5432/jsonrpc_bench"
# API configuration
export API_PORT=8080
export LOG_LEVEL=info
# Dashboard configuration (in dashboard/.env)
VITE_API_URL=http://localhost:8080
VITE_WS_URL=ws://localhost:8080/api/ws
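If you need the individual connection fields rather than the combined URL, a DATABASE_URL of the form above can be split with plain shell parameter expansion. A sketch, assuming the postgres://user:password@host:port/db shape:

```shell
# Split a postgres:// URL into its parts (illustrative; assumes no '@' or ':' in the password)
DATABASE_URL="postgres://postgres:postgres@localhost:5432/jsonrpc_bench"

rest="${DATABASE_URL#postgres://}"   # user:password@host:port/db
creds="${rest%%@*}"                  # user:password
hostdb="${rest#*@}"                  # host:port/db

DB_USER="${creds%%:*}"
DB_HOST="${hostdb%%:*}"
DB_NAME="${hostdb##*/}"
echo "$DB_USER $DB_HOST $DB_NAME"
```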
Set up alerts for performance regressions:
- Configure notification channels (Slack, Discord, email)
- Set alert rules for:
- Latency increases > 20%
- Error rate > 1%
- Throughput decreases > 15%
- Enable alert evaluation in Grafana settings
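The latency rule above comes down to a simple percentage check. A sketch of that arithmetic with hypothetical p95 values (integer math, milliseconds):

```shell
# Flag a >20% p95 latency increase against the baseline (values are hypothetical)
BASELINE_P95=120   # ms
CURRENT_P95=150    # ms

DELTA=$(( (CURRENT_P95 - BASELINE_P95) * 100 / BASELINE_P95 ))
if [ "$DELTA" -gt 20 ]; then
  echo "regression: p95 latency up ${DELTA}%"   # prints "regression: p95 latency up 25%"
else
  echo "ok"
fi
```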
Configure webhooks for CI/CD integration:
# Environment variables for webhook notifications
export DISCORD_WEBHOOK_URL="https://discord.com/api/webhooks/..."
export SLACK_WEBHOOK_URL="https://hooks.slack.com/services/..."
export GITHUB_WEBHOOK_URL="https://api.github.com/repos/owner/repo/dispatches"
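The exact payload the runner sends is not documented here; as an illustration, a minimal Discord-style message (Discord webhooks accept a JSON body with a content field) could be assembled and posted like this:

```shell
# Build a minimal notification payload (the message text is illustrative)
RUN_ID="20250103-130000-def456"
PAYLOAD=$(printf '{"content": "Benchmark run %s finished"}' "$RUN_ID")
echo "$PAYLOAD"

# Deliver it (requires a real DISCORD_WEBHOOK_URL):
#   curl -X POST -H "Content-Type: application/json" -d "$PAYLOAD" "$DISCORD_WEBHOOK_URL"
```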
The benchmark runner generates multiple output formats:
- HTML Reports: Interactive reports with charts and metrics
- JSON Data: Machine-readable benchmark results
- CSV Exports: For spreadsheet analysis
- Grafana Dashboards: Time-series visualizations
- Historic Analysis: Trend reports and regression detection
- Follow Go conventions for backend code
- Use TypeScript for frontend development
- Add tests for new features
- Update documentation for API changes
- Ensure backward compatibility for configuration files
This project is licensed under the MIT License - see the LICENSE file for details.