diff --git a/.claude/commands/optimize.md b/.claude/commands/optimize.md
new file mode 100644
index 0000000000..c96a99b43f
--- /dev/null
+++ b/.claude/commands/optimize.md
@@ -0,0 +1,95 @@
+---
+name: optimize
+description: Run a profiling-driven optimization loop for a specific function
+argument-hint: "<function-name> e.g. executeEVMTxWithGigaExecutor"
+allowed-tools:
+  - Read
+  - Write
+  - Edit
+  - Glob
+  - Grep
+  - Bash
+  - Task
+  - AskUserQuestion
+---
+
+# Optimization Loop for: $ARGUMENTS
+
+You are running a profiling-driven optimization loop focused on the function `$ARGUMENTS`.
+
+## References
+
+Read `benchmark/CLAUDE.md` for benchmark commands, environment variables, profiling, and the full optimization loop steps.
+
+## Workflow
+
+Execute the optimization loop from the benchmark/CLAUDE.md section "Optimization loop", but focused on `$ARGUMENTS`:
+
+### Phase 1: Understand the target function
+
+1. Find the function `$ARGUMENTS` in the codebase using Grep
+2. Read the function and its callers/callees to understand the hot path
+3. Identify what packages, types, and helpers it uses
+
+### Phase 2: Profile
+
+4. Run the benchmark: `GIGA_EXECUTOR=true GIGA_OCC=true benchmark/benchmark.sh`
+5. Wait for it to complete (default DURATION=120s)
+
+### Phase 3: Analyze (focused on the target function)
+
+6. Run pprof analysis focused on `$ARGUMENTS` and its call tree. Run these in parallel:
+   - CPU: `go tool pprof -top -cum -nodecount=40 /tmp/sei-bench/pprof/cpu.pb.gz 2>&1 | head -60`
+   - fgprof: `go tool pprof -top -cum -nodecount=40 /tmp/sei-bench/pprof/fgprof.pb.gz 2>&1 | head -60`
+   - Heap (alloc_space): `go tool pprof -alloc_space -top -cum -nodecount=40 /tmp/sei-bench/pprof/heap.pb.gz 2>&1 | head -60`
+   - Heap (alloc_objects): `go tool pprof -alloc_objects -top -cum -nodecount=40 /tmp/sei-bench/pprof/heap.pb.gz 2>&1 | head -60`
+   - Block: `go tool pprof -top -cum -nodecount=40 /tmp/sei-bench/pprof/block.pb.gz 2>&1 | head -60`
+   - Mutex: `go tool pprof -top -cum -nodecount=40 /tmp/sei-bench/pprof/mutex.pb.gz 2>&1 | head -60`
+7. Use `go tool pprof -text -focus='$ARGUMENTS' /tmp/sei-bench/pprof/cpu.pb.gz` to get a function-focused breakdown
+8. Open flamegraphs on separate ports for the user to inspect:
+   - `go tool pprof -http=:8080 /tmp/sei-bench/pprof/cpu.pb.gz &`
+   - `go tool pprof -http=:8081 /tmp/sei-bench/pprof/fgprof.pb.gz &`
+   - `go tool pprof -http=:8082 -alloc_space /tmp/sei-bench/pprof/heap.pb.gz &`
+
+### Phase 4: Summarize and discuss
+
+9. Present findings to the user:
+   - TPS from the benchmark run (extract from `/tmp/sei-bench/tps.txt`)
+   - Where `$ARGUMENTS` and its callees spend the most time (CPU, wall-clock)
+   - Biggest allocation hotspots within the function's call tree
+   - Any contention (block/mutex) in the function's path
+   - Top 2-3 candidate optimizations with expected impact and trade-offs
+10. Ask the user which optimization direction to pursue. Do NOT write any code until the user picks.
+
+### Phase 5: Implement
+
+11. Implement the chosen optimization
+12. Run `gofmt -s -w` on all modified `.go` files
+13. Commit the change
+
+### Phase 6: Compare
+
+14. Record the commit hashes from before and after the optimization
+15. Run the comparison: `benchmark/benchmark-compare.sh baseline=<commit> candidate=<commit>`
+16. Open diff flamegraphs for the user:
+   - `go tool pprof -http=:8083 -diff_base /tmp/sei-bench/baseline/pprof/cpu.pb.gz /tmp/sei-bench/candidate/pprof/cpu.pb.gz &`
+   - `go tool pprof -http=:8084 -diff_base /tmp/sei-bench/baseline/pprof/fgprof.pb.gz /tmp/sei-bench/candidate/pprof/fgprof.pb.gz &`
+   - `go tool pprof -http=:8085 -diff_base /tmp/sei-bench/baseline/pprof/heap.pb.gz /tmp/sei-bench/candidate/pprof/heap.pb.gz &`
+
+### Phase 7: Validate
+
+17. Present results:
+   - TPS delta (baseline vs candidate)
+   - CPU diff: `go tool pprof -top -diff_base /tmp/sei-bench/baseline/pprof/cpu.pb.gz /tmp/sei-bench/candidate/pprof/cpu.pb.gz`
+   - Heap diff: `go tool pprof -alloc_space -top -diff_base /tmp/sei-bench/baseline/pprof/heap.pb.gz /tmp/sei-bench/candidate/pprof/heap.pb.gz`
+18. Ask the user: keep, iterate, or revert?
+19. If the user chooses to keep the change, ask whether to open a PR
+
+## Important rules
+
+- ALWAYS ask the user before writing any optimization code (step 10)
+- ALWAYS ask the user before opening a PR (step 19)
+- Cross-session benchmark numbers are NOT comparable. Only compare within the same `benchmark-compare.sh` run.
+- Run `gofmt -s -w` on all modified Go files before committing
+- If `$ARGUMENTS` is empty or not found, ask the user to provide the function name
+- GC tuning (GOGC, GOMEMLIMIT, debug.SetGCPercent, debug.SetMemoryLimit) is NOT a valid optimization. Do not modify GC parameters or memory limits; focus on reducing allocations and improving algorithmic efficiency instead.
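Steps 15-17 lean on `go tool pprof -diff_base` for the baseline/candidate comparison. The same diff can be computed programmatically with `github.com/google/pprof/profile`, the library underlying the pprof CLI, which is handy when the loop is scripted. A minimal sketch, assuming the `/tmp/sei-bench/{baseline,candidate}/pprof/` layout referenced above; the flat, leaf-function aggregation only loosely mirrors `pprof -top`:

```go
// profdiff.go - a sketch of computing a CPU profile diff in code instead of
// via `go tool pprof -diff_base`. Paths assume the layout used in steps 16-17.
package main

import (
	"fmt"
	"log"
	"os"
	"sort"

	"github.com/google/pprof/profile"
)

func load(path string) *profile.Profile {
	f, err := os.Open(path)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	p, err := profile.Parse(f) // Parse handles the gzipped .pb.gz encoding
	if err != nil {
		log.Fatal(err)
	}
	return p
}

func main() {
	base := load("/tmp/sei-bench/baseline/pprof/cpu.pb.gz")
	cand := load("/tmp/sei-bench/candidate/pprof/cpu.pb.gz")

	// Negate the baseline and merge: positive totals in the result are
	// regressions in the candidate, negative totals are improvements.
	base.Scale(-1)
	diff, err := profile.Merge([]*profile.Profile{base, cand})
	if err != nil {
		log.Fatal(err)
	}

	// Flat view: attribute each sample's first value to its leaf function.
	flat := map[string]int64{}
	for _, s := range diff.Sample {
		if len(s.Location) > 0 && len(s.Location[0].Line) > 0 {
			flat[s.Location[0].Line[0].Function.Name] += s.Value[0]
		}
	}
	names := make([]string, 0, len(flat))
	for n := range flat {
		names = append(names, n)
	}
	sort.Slice(names, func(i, j int) bool { return flat[names[i]] > flat[names[j]] })
	for i, n := range names {
		if i == 20 {
			break
		}
		fmt.Printf("%+12d  %s\n", flat[n], n)
	}
}
```

The negate-and-merge step is the same trick `-diff_base` applies internally: baseline samples are scaled by -1 and merged into the candidate profile.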
diff --git a/CLAUDE.md b/CLAUDE.md
index ee69756150..6a19a87161 100644
--- a/CLAUDE.md
+++ b/CLAUDE.md
@@ -17,3 +17,7 @@ gofmt -s -l .
 ```
 
 This command should produce no output if all files are properly formatted.
+
+## Benchmarking
+
+See [benchmark/CLAUDE.md](benchmark/CLAUDE.md) for benchmark usage, environment variables, and comparison workflows.
diff --git a/app/abci.go b/app/abci.go
index c967ac8b0b..052cfed37d 100644
--- a/app/abci.go
+++ b/app/abci.go
@@ -59,7 +59,7 @@ func (app *App) EndBlock(ctx sdk.Context, height int64, blockGasUsed int64) (res
 	defer span.End()
 	ctx = ctx.WithTraceSpanContext(spanCtx)
 	defer telemetry.MeasureSince(time.Now(), "abci", "end_block")
-	ctx = ctx.WithEventManager(sdk.NewEventManager())
+	ctx = ctx.WithEventManager(sdk.NewEventManager()).WithSkipGasKV()
 	defer telemetry.MeasureSince(time.Now(), "module", "total_end_block")
 	res.ValidatorUpdates = legacyabci.EndBlock(ctx, height, blockGasUsed, app.EndBlockKeepers)
 	res.Events = sdk.MarkEventsToIndex(ctx.EventManager().ABCIEvents(), app.IndexEvents)
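`WithSkipGasKV` (added to the context here and again in `executeEVMTxWithGigaExecutor` below) exploits the fact that gas-metered KV wrapping buys nothing under an infinite gas meter: every store access normally passes through a `gaskv` wrapper that charges gas per operation, and that wrapper is pure allocation overhead when the meter never runs out. A self-contained sketch of the pattern - all types and gas costs here are illustrative, not the SDK's actual implementation:

```go
package main

import "fmt"

// KVStore is a stand-in for the SDK's KVStore interface.
type KVStore interface {
	Get(key []byte) []byte
	Set(key, value []byte)
}

type memStore struct{ m map[string][]byte }

func (s *memStore) Get(k []byte) []byte { return s.m[string(k)] }
func (s *memStore) Set(k, v []byte)     { s.m[string(k)] = v }

// gasStore charges gas on every access. Allocating one of these per store
// per tx is the overhead that skipping the wrapper avoids.
type gasStore struct {
	parent   KVStore
	consumed *uint64
}

func (s gasStore) Get(k []byte) []byte {
	*s.consumed += 1000 // illustrative flat read cost
	return s.parent.Get(k)
}

func (s gasStore) Set(k, v []byte) {
	*s.consumed += 2000 // illustrative flat write cost
	s.parent.Set(k, v)
}

// kvStore mirrors the assumed effect of WithSkipGasKV: hand back the raw
// store when metering is disabled instead of wrapping it.
func kvStore(parent KVStore, consumed *uint64, skipGasKV bool) KVStore {
	if skipGasKV {
		return parent
	}
	return gasStore{parent: parent, consumed: consumed}
}

func main() {
	var gas uint64
	s := kvStore(&memStore{m: map[string][]byte{}}, &gas, true)
	s.Set([]byte("k"), []byte("v"))
	fmt.Printf("value=%s gasCharged=%d\n", s.Get([]byte("k")), gas)
}
```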
diff --git a/app/app.go b/app/app.go
index 2a54075628..522d07d3cd 100644
--- a/app/app.go
+++ b/app/app.go
@@ -9,6 +9,7 @@ import (
 	"fmt"
 	"io"
 	"math"
+	"math/big"
 	"net/http"
 	"os"
 	"path/filepath"
@@ -92,6 +93,7 @@ import (
 	ethtypes "github.com/ethereum/go-ethereum/core/types"
 	"github.com/ethereum/go-ethereum/core/vm"
 	"github.com/ethereum/go-ethereum/ethclient"
+	ethparams "github.com/ethereum/go-ethereum/params"
 	ethrpc "github.com/ethereum/go-ethereum/rpc"
 
 	"github.com/sei-protocol/sei-chain/giga/deps/tasks"
@@ -317,6 +319,36 @@ func GetWasmEnabledProposals() []wasm.ProposalType {
 // App extends an ABCI application, but with most of its parameters exported.
 // They are exported for convenience in creating helper functions, as object
 // capabilities aren't needed for testing.
+// gigaBlockCache holds block-constant values that are identical for all txs in a block.
+// Populated once before block execution, read-only during parallel execution, cleared after.
+type gigaBlockCache struct {
+	chainID     *big.Int
+	blockCtx    vm.BlockContext
+	chainConfig *ethparams.ChainConfig
+	baseFee     *big.Int
+	evmPool     *gigaexecutor.EVMPool
+}
+
+func newGigaBlockCache(ctx sdk.Context, keeper *gigaevmkeeper.Keeper) (*gigaBlockCache, error) {
+	chainID := keeper.ChainID(ctx)
+	gp := keeper.GetGasPool()
+	blockCtx, err := keeper.GetVMBlockContext(ctx, gp)
+	if err != nil {
+		return nil, err
+	}
+	sstore := keeper.GetParams(ctx).SeiSstoreSetGasEip2200
+	chainConfig := evmtypes.DefaultChainConfig().EthereumConfigWithSstore(chainID, &sstore)
+	baseFee := keeper.GetBaseFee(ctx)
+	evmPool := gigaexecutor.NewEVMPool(*blockCtx, chainConfig, vm.Config{}, gigaprecompiles.AllCustomPrecompilesFailFast)
+	return &gigaBlockCache{
+		chainID:     chainID,
+		blockCtx:    *blockCtx,
+		chainConfig: chainConfig,
+		baseFee:     baseFee,
+		evmPool:     evmPool,
+	}, nil
+}
+
 type App struct {
 	*baseapp.BaseApp
@@ -1371,6 +1403,14 @@ func (app *App) ProcessTxsSynchronousGiga(ctx sdk.Context, txs [][]byte, typedTx
 	ms := ctx.MultiStore().CacheMultiStore()
 	defer ms.Write()
 	ctx = ctx.WithMultiStore(ms)
+
+	// Cache block-level constants (identical for all txs in this block).
+	cache, cacheErr := newGigaBlockCache(ctx, &app.GigaEvmKeeper)
+	if cacheErr != nil {
+		ctx.Logger().Error("failed to build giga block cache", "error", cacheErr, "height", ctx.BlockHeight())
+		return nil
+	}
+
 	txResults := make([]*abci.ExecTxResult, len(txs))
 	for i, tx := range txs {
 		ctx = ctx.WithTxIndex(absoluteTxIndices[i])
@@ -1384,7 +1424,7 @@
 		}
 
 		// Execute EVM transaction through giga executor
-		result, execErr := app.executeEVMTxWithGigaExecutor(ctx, evmMsg)
+		result, execErr := app.executeEVMTxWithGigaExecutor(ctx, evmMsg, cache)
 		if execErr != nil {
 			// Check if this is a fail-fast error (Cosmos precompile interop detected)
 			if gigautils.ShouldExecutionAbort(execErr) {
@@ -1542,15 +1582,24 @@ func (app *App) ProcessTXsWithOCCGiga(ctx sdk.Context, txs [][]byte, typedTxs []
 		}
 	}
 
-	// Create OCC scheduler with giga executor deliverTx.
+	// Run EVM txs against a cache so we can discard all changes on fallback.
+	evmCtx, evmCache := app.CacheContext(ctx)
+
+	// Cache block-level constants (identical for all txs in this block).
+	// Must use evmCtx (not ctx) because giga KV stores are registered in CacheContext.
+	cache, cacheErr := newGigaBlockCache(evmCtx, &app.GigaEvmKeeper)
+	if cacheErr != nil {
+		ctx.Logger().Error("failed to build giga block cache", "error", cacheErr, "height", ctx.BlockHeight())
+		return nil, ctx
+	}
+
+	// Create OCC scheduler with giga executor deliverTx capturing the cache.
 	evmScheduler := tasks.NewScheduler(
 		app.ConcurrencyWorkers(),
 		app.TracingInfo,
-		app.gigaDeliverTx,
+		app.makeGigaDeliverTx(cache),
 	)
 
-	// Run EVM txs against a cache so we can discard all changes on fallback.
-	evmCtx, evmCache := app.CacheContext(ctx)
 	evmBatchResult, evmSchedErr := evmScheduler.ProcessAll(evmCtx, evmEntries)
 	if evmSchedErr != nil {
 		// TODO: DeliverTxBatch panics in this case
@@ -1711,14 +1760,14 @@ func (app *App) ProcessBlock(ctx sdk.Context, txs [][]byte, req BlockProcessRequ
 
 // executeEVMTxWithGigaExecutor executes a single EVM transaction using the giga executor.
 // The sender address is recovered directly from the transaction signature - no Cosmos SDK ante handlers needed.
-func (app *App) executeEVMTxWithGigaExecutor(ctx sdk.Context, msg *evmtypes.MsgEVMTransaction) (*abci.ExecTxResult, error) {
+func (app *App) executeEVMTxWithGigaExecutor(ctx sdk.Context, msg *evmtypes.MsgEVMTransaction, cache *gigaBlockCache) (*abci.ExecTxResult, error) {
 	// Get the Ethereum transaction from the message
 	ethTx, txData := msg.AsTransaction()
 	if ethTx == nil || txData == nil {
 		return nil, fmt.Errorf("failed to convert to eth transaction")
 	}
 
-	chainID := app.GigaEvmKeeper.ChainID(ctx)
+	chainID := cache.chainID
 
 	// Recover sender using the same logic as preprocess.go (version-based signer selection)
 	sender, seiAddr, pubkey, recoverErr := evmante.RecoverSenderFromEthTx(ctx, ethTx, chainID)
@@ -1740,34 +1789,23 @@
 		}
 	}
 
-	// Prepare context for EVM transaction (set infinite gas meter like original flow)
-	ctx = ctx.WithGasMeter(sdk.NewInfiniteGasMeterWithMultiplier(ctx))
+	// Prepare context for EVM transaction (set infinite gas meter like original flow).
+	// Skip gaskv wrapping since gas metering is infinite - saves ~3GB allocs per 30s.
+	ctx = ctx.WithGasMeter(sdk.NewInfiniteGasMeterWithMultiplier(ctx)).WithSkipGasKV()
 
 	// Create state DB for this transaction
 	stateDB := gigaevmstate.NewDBImpl(ctx, &app.GigaEvmKeeper, false)
 	defer stateDB.Cleanup()
 
-	// Get gas pool
+	// Get gas pool (mutated per tx, cannot be cached)
 	gp := app.GigaEvmKeeper.GetGasPool()
 
-	// Get block context
-	blockCtx, blockCtxErr := app.GigaEvmKeeper.GetVMBlockContext(ctx, gp)
-	if blockCtxErr != nil {
-		return &abci.ExecTxResult{
-			Code: 1,
-			Log:  fmt.Sprintf("failed to get block context: %v", blockCtxErr),
-		}, nil
-	}
-
-	// Get chain config
-	sstore := app.GigaEvmKeeper.GetParams(ctx).SeiSstoreSetGasEip2200
-	cfg := evmtypes.DefaultChainConfig().EthereumConfigWithSstore(app.GigaEvmKeeper.ChainID(ctx), &sstore)
-
-	// Create Giga executor VM
-	gigaExecutor := gigaexecutor.NewGethExecutor(*blockCtx, stateDB, cfg, vm.Config{}, gigaprecompiles.AllCustomPrecompilesFailFast)
+	// Get a pooled EVM executor (reuses EVM struct + interpreter + precompile maps)
+	gigaExecutor := cache.evmPool.GetExecutor(stateDB)
+	defer cache.evmPool.PutExecutor(gigaExecutor)
 
 	// Execute the transaction through giga VM
-	execResult, execErr := gigaExecutor.ExecuteTransaction(ethTx, sender, app.GigaEvmKeeper.GetBaseFee(ctx), &gp)
+	execResult, execErr := gigaExecutor.ExecuteTransaction(ethTx, sender, cache.baseFee, &gp)
 	if execErr != nil {
 		return &abci.ExecTxResult{
 			Code: 1,
@@ -1886,49 +1924,52 @@
 }
 
 // gigaDeliverTx is the OCC-compatible deliverTx function for the giga executor.
-// The ctx.MultiStore() is already wrapped with VersionIndexedStore by the scheduler.
-func (app *App) gigaDeliverTx(ctx sdk.Context, req abci.RequestDeliverTxV2, tx sdk.Tx, checksum [32]byte) abci.ResponseDeliverTx {
-	defer func() {
-		if r := recover(); r != nil {
-			// OCC abort panics are expected - the scheduler uses them to detect conflicts
-			// and reschedule transactions. Don't log these as errors.
-			if _, isOCCAbort := r.(occ.Abort); !isOCCAbort {
-				ctx.Logger().Error("benchmark panic in gigaDeliverTx", "panic", r, "stack", string(debug.Stack()))
+// makeGigaDeliverTx returns an OCC-compatible deliverTx callback that captures the given
+// block cache, avoiding mutable state on App for cache lifecycle management.
+func (app *App) makeGigaDeliverTx(cache *gigaBlockCache) func(sdk.Context, abci.RequestDeliverTxV2, sdk.Tx, [32]byte) abci.ResponseDeliverTx {
+	return func(ctx sdk.Context, req abci.RequestDeliverTxV2, tx sdk.Tx, checksum [32]byte) abci.ResponseDeliverTx {
+		defer func() {
+			if r := recover(); r != nil {
+				// OCC abort panics are expected - the scheduler uses them to detect conflicts
+				// and reschedule transactions. Don't log these as errors.
+				if _, isOCCAbort := r.(occ.Abort); !isOCCAbort {
+					ctx.Logger().Error("benchmark panic in gigaDeliverTx", "panic", r, "stack", string(debug.Stack()))
+				}
 			}
-		}
-	}()
+		}()
 
-	evmMsg := app.GetEVMMsg(tx)
-	if evmMsg == nil {
-		return abci.ResponseDeliverTx{Code: 1, Log: "not an EVM transaction"}
-	}
+		evmMsg := app.GetEVMMsg(tx)
+		if evmMsg == nil {
+			return abci.ResponseDeliverTx{Code: 1, Log: "not an EVM transaction"}
+		}
 
-	result, err := app.executeEVMTxWithGigaExecutor(ctx, evmMsg)
-	if err != nil {
-		// Check if this is a fail-fast error (Cosmos precompile interop detected)
-		if gigautils.ShouldExecutionAbort(err) {
-			// Return a sentinel response so the caller can fall back to v2.
-			return abci.ResponseDeliverTx{
-				Code:      gigautils.GigaAbortCode,
-				Codespace: gigautils.GigaAbortCodespace,
-				Info:      gigautils.GigaAbortInfo,
-				Log:       "giga executor abort: fall back to v2",
+		result, err := app.executeEVMTxWithGigaExecutor(ctx, evmMsg, cache)
+		if err != nil {
+			// Check if this is a fail-fast error (Cosmos precompile interop detected)
+			if gigautils.ShouldExecutionAbort(err) {
+				// Return a sentinel response so the caller can fall back to v2.
+				return abci.ResponseDeliverTx{
+					Code:      gigautils.GigaAbortCode,
+					Codespace: gigautils.GigaAbortCodespace,
+					Info:      gigautils.GigaAbortInfo,
+					Log:       "giga executor abort: fall back to v2",
+				}
 			}
+			return abci.ResponseDeliverTx{Code: 1, Log: fmt.Sprintf("giga executor error: %v", err)}
 		}
-		return abci.ResponseDeliverTx{Code: 1, Log: fmt.Sprintf("giga executor error: %v", err)}
-	}
 
-	return abci.ResponseDeliverTx{
-		Code:      result.Code,
-		Data:      result.Data,
-		Log:       result.Log,
-		Info:      result.Info,
-		GasWanted: result.GasWanted,
-		GasUsed:   result.GasUsed,
-		Events:    result.Events,
-		Codespace: result.Codespace,
-		EvmTxInfo: result.EvmTxInfo,
+		return abci.ResponseDeliverTx{
+			Code:      result.Code,
+			Data:      result.Data,
+			Log:       result.Log,
+			Info:      result.Info,
+			GasWanted: result.GasWanted,
+			GasUsed:   result.GasUsed,
+			Events:    result.Events,
+			Codespace: result.Codespace,
+			EvmTxInfo: result.EvmTxInfo,
+		}
 	}
 }
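Two patterns carry most of the app.go change: block-constant values are computed once into `gigaBlockCache` and shared read-only by every worker (handed to the deliverTx closure rather than stored on `App`), and executors are pooled so the EVM struct, interpreter, and precompile maps are reused across transactions. `EVMPool`'s implementation is not part of this diff; the sketch below shows the assumed shape of the pattern using `sync.Pool`, with all names illustrative:

```go
package main

import (
	"fmt"
	"math/big"
	"sync"
)

// blockCache mirrors gigaBlockCache: computed once per block, then treated
// as strictly read-only by every worker goroutine for the rest of the block.
type blockCache struct {
	chainID *big.Int
	baseFee *big.Int
}

// executor stands in for the pooled EVM: reusing it across txs avoids
// reallocating interpreter and precompile state on every transaction.
type executor struct{ scratch []byte }

var execPool = sync.Pool{
	New: func() any { return &executor{scratch: make([]byte, 0, 4096)} },
}

// makeDeliverTx mirrors makeGigaDeliverTx: the per-block cache is captured
// by the closure rather than stored as mutable state on the app.
func makeDeliverTx(cache *blockCache) func(tx int) string {
	return func(tx int) string {
		ex := execPool.Get().(*executor)
		defer execPool.Put(ex)
		ex.scratch = ex.scratch[:0] // reset per-tx state before reuse
		return fmt.Sprintf("tx %d chainID=%s baseFee=%s", tx, cache.chainID, cache.baseFee)
	}
}

func main() {
	deliver := makeDeliverTx(&blockCache{chainID: big.NewInt(1329), baseFee: big.NewInt(1)})
	var wg sync.WaitGroup
	out := make([]string, 4)
	for i := range out {
		wg.Add(1)
		go func(i int) { defer wg.Done(); out[i] = deliver(i) }(i)
	}
	wg.Wait()
	for _, line := range out {
		fmt.Println(line)
	}
}
```

Capturing the cache in a closure keeps its lifetime lexically tied to the block that built it, which is the point of the makeGigaDeliverTx comment above.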
diff --git a/app/benchmark/benchmark.go b/app/benchmark/benchmark.go
index 6d454f064b..05723059cd 100644
--- a/app/benchmark/benchmark.go
+++ b/app/benchmark/benchmark.go
@@ -17,7 +17,7 @@
 //	proposalCh := gen.StartProposalChannel(ctx, benchLogger)
 //
 // The generator can be configured via JSON config files that follow the sei-load
-// LoadConfig format. See scripts/scenarios/ for example configurations.
+// LoadConfig format. See benchmark/scenarios/ for example configurations.
 package benchmark
 
 import (
diff --git a/app/benchmark_profiling.go b/app/benchmark_profiling.go
new file mode 100644
index 0000000000..e5fa9f3191
--- /dev/null
+++ b/app/benchmark_profiling.go
@@ -0,0 +1,17 @@
+//go:build benchmark
+
+package app
+
+import "runtime"
+
+func init() {
+	// Enable block profiling: record blocking events lasting 1us or longer.
+	// Lower values capture more events but add overhead that can skew TPS.
+	// This lets /debug/pprof/block show time spent waiting on channels and mutexes.
+	runtime.SetBlockProfileRate(1000)
+
+	// Enable mutex contention profiling: sample 1 in 5 contention events.
+	// Full capture (fraction=1) adds measurable overhead; 1/5 is a good balance.
+	// This lets /debug/pprof/mutex show where goroutines contend on locks.
+	runtime.SetMutexProfileFraction(5)
+}
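Without the `benchmark` build tag, both profile endpoints still exist but their rates stay at the default of zero, so the profiles come back empty. A quick way to confirm a binary was built with `-tags benchmark` is to pull both profiles in debug form - a sketch; the `:6060` address is an assumption, substitute the node's actual pprof port (including any `PORT_OFFSET`):

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// debug=1 returns a human-readable dump instead of the binary proto.
	for _, name := range []string{"block", "mutex"} {
		url := fmt.Sprintf("http://localhost:6060/debug/pprof/%s?debug=1", name)
		resp, err := http.Get(url)
		if err != nil {
			log.Fatal(err)
		}
		body, err := io.ReadAll(resp.Body)
		resp.Body.Close()
		if err != nil {
			log.Fatal(err)
		}
		// A benchmark-tagged binary under load shows sampled events here;
		// a default build reports an essentially empty profile.
		fmt.Printf("== %s (%d bytes) ==\n%.300s\n", name, len(body), body)
	}
}
```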
diff --git a/benchmark/CLAUDE.md b/benchmark/CLAUDE.md
new file mode 100644
index 0000000000..677e74666a
--- /dev/null
+++ b/benchmark/CLAUDE.md
@@ -0,0 +1,225 @@
+# Benchmark
+
+## Pre-run analysis check
+
+Before running `benchmark.sh` or `benchmark-compare.sh`, read any relevant docs in `benchmark/analysis/` for the target function or path. This keeps each run tied to prior findings and reduces repeated experimentation.
+
+```bash
+ls benchmark/analysis/*.md
+```
+
+For example, for `executeEVMTxWithGigaExecutor`, review:
+
+```bash
+sed -n '1,220p' benchmark/analysis/executeEVMTxWithGigaExecutor.md
+```
+
+Use the analysis notes to define your baseline, expected bottlenecks, and pass/fail criteria before launching a new benchmark.
+
+## Single scenario
+
+```bash
+GIGA_EXECUTOR=true GIGA_OCC=true benchmark/benchmark.sh
+```
+
+By default, the benchmark runs for `DURATION=120` seconds, auto-captures all 6 profile types, extracts TPS stats, and exits. Profiles are saved to `/tmp/sei-bench/pprof/`, TPS data to `/tmp/sei-bench/tps.txt`, and the full log to `/tmp/sei-bench/output.log`.
+
+Use `DURATION=0` to run forever (manual capture, original behavior).
+
+TPS is logged every 5s as `tps=<value>` (with ANSI color codes). For manual extraction:
+
+```bash
+sed 's/\x1b\[[0-9;]*m//g' /tmp/sei-bench/output.log | sed -n 's/.*tps=\([0-9.]*\).*/\1/p'
+```
+
+Available scenarios in `benchmark/scenarios/`: `evm.json` (default), `erc20.json`, `mixed.json`, `default.json`.
+
+```bash
+# Use a different scenario
+BENCHMARK_CONFIG=benchmark/scenarios/erc20.json benchmark/benchmark.sh
+```
+
+## Environment variables
+
+### benchmark.sh
+
+| Var | Default | Purpose |
+|-----|---------|---------|
+| `BENCHMARK_PHASE` | `all` | `init` (build+init+configure), `start` (run node), `all` (both) |
+| `SEI_HOME` | `$HOME/.sei` | Final chain data dir. Init uses a temp staging dir, then moves here |
+| `PORT_OFFSET` | `0` | Added to all ports (RPC, P2P, pprof, gRPC, etc.) |
+| `SEID_BIN` | `""` | Pre-built binary path. If set, skip the build step |
+| `LOG_FILE` | `""` | Redirect seid output to a file |
+| `BENCHMARK_CONFIG` | `$SCRIPT_DIR/scenarios/evm.json` | Scenario config file (absolute path resolved from script location) |
+| `BENCHMARK_TXS_PER_BATCH` | `1000` | Transactions per batch |
+| `GIGA_EXECUTOR` | `false` | Enable evmone-based EVM executor |
+| `GIGA_OCC` | `false` | Enable OCC for the Giga Executor |
+| `DB_BACKEND` | `goleveldb` | Database backend (goleveldb, memdb, cleveldb, rocksdb) |
+| `MOCK_BALANCES` | `true` | Use mock balances during the benchmark |
+| `DISABLE_INDEXER` | `true` | Disable the indexer for the benchmark (reduces I/O overhead) |
+| `DEBUG` | `false` | Print all log output without filtering |
+| `DURATION` | `120` | Auto-stop after N seconds (0 = run forever) |
+
+### benchmark-compare.sh
+
+Inherits all benchmark.sh vars via delegation. Additionally:
+
+| Var | Default | Purpose |
+|-----|---------|---------|
+| `DURATION` | `120` | How long (seconds) to run each node before stopping |
+| `GIGA_EXECUTOR` | **`true`** | Overrides the benchmark.sh default (`false`) |
+| `GIGA_OCC` | **`true`** | Overrides the benchmark.sh default (`false`) |
+| `DB_BACKEND` | `goleveldb` | Forwarded to the build and init phases |
+| `RUN_ID` | `$$` (PID) | Namespaces `BASE_DIR` as `/tmp/sei-bench-${RUN_ID}/` |
+| `RUN_PORT_OFFSET` | auto-claimed | Added to all per-scenario port offsets (auto-claimed via atomic `mkdir` slots) |
+
+**Note:** `GIGA_EXECUTOR` and `GIGA_OCC` default to `true` in the compare script but `false` in benchmark.sh. The compare script is designed for performance comparisons where the Giga Executor is typically enabled.
+
+## Parallel multi-scenario comparison
+
+Use `benchmark/benchmark-compare.sh` to run multiple git commits side-by-side (minimum 2 scenarios required):
+
+```bash
+benchmark/benchmark-compare.sh \
+  pre-opt=fd2e28d74 \
+  lazy-cms=82acf458d \
+  lazy-cms-fix=37a17fd02
+```
+
+Each scenario gets its own binary, home dir, and port set (offset by 100). Results are printed at the end with median/avg/min/max TPS. Raw data is in `/tmp/sei-bench-<RUN_ID>/`
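For completeness, the same post-processing the scripts do with sed can be done in Go when the log is consumed programmatically - a sketch assuming the single-scenario `/tmp/sei-bench/output.log` path and the `tps=` log format described in benchmark/CLAUDE.md above; the stats mirror the median/avg/min/max summary that benchmark-compare.sh prints:

```go
// tps.go - a Go equivalent of the sed one-liner above: strip ANSI color
// codes from the benchmark log, collect tps= samples, and summarize them.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"regexp"
	"sort"
	"strconv"
)

var (
	ansi = regexp.MustCompile(`\x1b\[[0-9;]*m`)
	tps  = regexp.MustCompile(`tps=([0-9.]+)`)
)

func main() {
	f, err := os.Open("/tmp/sei-bench/output.log")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var samples []float64
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := ansi.ReplaceAllString(sc.Text(), "")
		if m := tps.FindStringSubmatch(line); m != nil {
			if v, err := strconv.ParseFloat(m[1], 64); err == nil {
				samples = append(samples, v)
			}
		}
	}
	if len(samples) == 0 {
		log.Fatal("no tps= samples found")
	}
	sort.Float64s(samples)
	var sum float64
	for _, v := range samples {
		sum += v
	}
	// Median here is the upper median for even sample counts.
	fmt.Printf("median=%.1f avg=%.1f min=%.1f max=%.1f (n=%d)\n",
		samples[len(samples)/2], sum/float64(len(samples)),
		samples[0], samples[len(samples)-1], len(samples))
}
```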