From ed71e70e0073f2458dff48476557950104299a4a Mon Sep 17 00:00:00 2001 From: pdrobnjak Date: Fri, 13 Feb 2026 15:19:41 +0100 Subject: [PATCH 1/5] docs(benchmark): add profiling analysis for executeEVMTxWithGigaExecutor Baseline: 8600 TPS median with GIGA_EXECUTOR=true GIGA_OCC=true. Key findings: - CacheMultiStore snapshot allocation is #1 target (15 GB, 27% of function) - cachekv.NewStore creates 9 GB of sync.Map objects per 30s sample - GC overhead (~24% CPU) is driven by allocation pressure - Lock contention (30.7% CPU) partially secondary to GC/alloc Candidate optimizations: sync.Pool for cachekv.Store, lazy per-store creation, replace sync.Map with regular map in giga path, cache block-level constants. Co-Authored-By: Claude Opus 4.6 --- .../analysis/executeEVMTxWithGigaExecutor.md | 157 ++++++++++++++++++ 1 file changed, 157 insertions(+) create mode 100644 benchmark/analysis/executeEVMTxWithGigaExecutor.md diff --git a/benchmark/analysis/executeEVMTxWithGigaExecutor.md b/benchmark/analysis/executeEVMTxWithGigaExecutor.md new file mode 100644 index 0000000000..c8f8700ec6 --- /dev/null +++ b/benchmark/analysis/executeEVMTxWithGigaExecutor.md @@ -0,0 +1,157 @@ +# Profiling Analysis: executeEVMTxWithGigaExecutor + +**Date:** 2026-02-13 +**Branch:** pd/benchmark-profiling-improvements (commit dcbfd5a02) +**Scenario:** benchmark/scenarios/evm.json +**Config:** GIGA_EXECUTOR=true GIGA_OCC=true DURATION=120 + +## Baseline + +| Metric | Value | +|--------|-------| +| Median TPS | 8600 | +| Avg TPS | 8495 | +| Min TPS | 7800 | +| Max TPS | 9000 | +| Readings | 21 | +| Block Height | 995 | + +## Function Hot Path + +``` +executeEVMTxWithGigaExecutor (app/app.go:1714) + ├─ msg.AsTransaction() + ├─ RecoverSenderFromEthTx() — ECDSA recovery + ├─ GetEVMAddress() / AssociateAddresses() + ├─ NewDBImpl() — allocates state DB + initial Snapshot + │ └─ Snapshot() — clones entire CacheMultiStore + ├─ GetVMBlockContext() + ├─ GetParams() + ChainID() (x2) — redundant: ChainID called at lines 1721 and 1764 + ├─ NewGethExecutor() → vm.NewEVM() — new EVM per tx + ├─ ExecuteTransaction() + │ └─ ApplyMessage() → StateTransition.Execute() + │ └─ EVM.Call() — may trigger additional Snapshot() calls + ├─ stateDB.Finalize() — flushCtxs → cachekv.Store.Write + ├─ WriteReceipt() + └─ Marshal response (protobuf) +``` + +## CPU Profile (30s sample, 120.21s total across 4 cores) + +`executeEVMTxWithGigaExecutor`: **25.95s cumulative (21.6%)** + +| Component | Cumulative | % of total | Notes | +|-----------|-----------|------------|-------| +| runtime.lock2 (spinlock) | 36.87s | 30.7% | OCC goroutines fighting for locks | +| runtime.usleep (lock backoff) | 30.00s | 25.0% | Spinning on contended locks | +| ExecuteTransaction → ApplyMessage | 15.31s | 12.7% | Actual EVM execution | +| GC (gcDrain + scanobject) | 17.15s + 11.49s | 23.8% | Driven by allocation pressure | +| mallocgc | 11.42s | 9.5% | Object allocation | +| runtime.kevent | 10.38s | 8.6% | I/O polling | + +### CPU Focused on executeEVMTxWithGigaExecutor + +Top flat contributors within the function's call tree: + +| Function | Flat | Notes | +|----------|------|-------| +| runtime.usleep | 4.95s | Lock spinning inside store operations | +| runtime.cgocall | 3.09s | CGo boundary (likely crypto) | +| runtime.mallocgc | 6.31s | Allocation pressure from stores | +| runtime.newobject | 3.72s | Heap allocations | +| memiavl.MemNode.Get | 1.00s | IAVL tree reads | +| cachekv.Store.Write | 1.03s | Flushing cache stores | +| cachemulti.newCacheMultiStoreFromCMS | 
1.22s | CMS creation during Snapshot |
+
+## Heap Profile (alloc_space)
+
+`executeEVMTxWithGigaExecutor`: **56.2 GB cumulative (54% of 104 GB total)**
+
+| Hotspot | Allocated | % of function | What |
+|---------|----------|--------------|------|
+| DBImpl.Snapshot → CacheMultiStore | 15.1 GB | 27% | Full cache store clone per snapshot |
+| cachekv.NewStore | 9.0 GB | 16% | Individual KV store objects (sync.Map x2-3) |
+| sync.newIndirectNode (sync.Map internals) | 7.8 GB | 14% | sync.Map internal trie nodes |
+| cachekv.Store.Write | 8.4 GB | 15% | Flushing stores (map iteration + write) |
+| DBImpl.Finalize → flushCtxs | 9.7 GB | 17% | Writing state changes back through layers |
+| GetBalance → LockedCoins | 9.0 GB | 16% | Balance lookups triggering deep store reads |
+| AccountKeeper.GetAccount | 7.2 GB | 13% | Account deserialization (protobuf UnpackAny) |
+| scheduler.prepareTask | 8.8 GB | 16% | OCC task preparation (VersionIndexedStore) |
+
+### Allocation Objects
+
+| Hotspot | Objects | % of total | Notes |
+|---------|---------|------------|-------|
+| cachekv.NewStore | 157M | 10.2% | Largest single flat allocator |
+| cachekv.Store.Write | 83M | 5.4% | Map iteration during flush |
+| codec/types.UnpackAny | 134M | 8.7% | Protobuf deserialization |
+| DBImpl.Snapshot | 137M | 8.9% | CMS + maps + sync primitives |
+
+## Mutex/Contention Profile (196s total)
+
+| Source | Time | % | What |
+|--------|------|---|------|
+| runtime.unlock | 159s | 81% | Runtime-level lock contention |
+| sync.Mutex.Unlock | 29s | 15% | Application mutex contention |
+| sync.RWMutex.Unlock | 19s | 10% | Reader-writer lock contention |
+| AccAddress.String | 10.6s | 5% | Bech32 encoding under lock |
+| EventManager.EmitEvents | 9.9s | 5% | Event emission contention |
+| sync.Map.Store (HashTrieMap.Swap) | 6.9s | 4% | sync.Map write contention |
+
+Rows are overlapping frames from the same profile tree, so their percentages do not sum to 100%.
+
+## Block Profile (7.26 hrs total)
+
+Dominated by `runtime.selectgo` (6.97 hrs / 96%) — the OCC scheduler's `select` loop waiting for tasks. Not actionable for this function.
+
+## Key Findings
+
+### 1. CacheMultiStore allocation is the #1 optimization target
+
+Each `Snapshot()` call triggers `CacheMultiStore()` which:
+- Materializes ALL lazy module stores (not just the ones the tx touches)
+- Creates `cachekv.NewStore` with 2-3 `sync.Map` objects per module store
+- Creates `gigacachekv.NewStore` with 2 `sync.Map` objects per giga store
+- Allocates map copies, sync.RWMutex, sync.Once per CMS
+
+This happens at least once per tx (NewDBImpl's initial Snapshot), plus additional times for nested EVM calls. OCC re-executions create entirely fresh snapshot chains.
+
+### 2. GC overhead is a direct consequence of allocation pressure
+
+17s of CPU on `gcDrain` + 11.5s on `scanobject` = ~24% of CPU spent on GC. The 104 GB of total allocations over 30s creates enormous GC pressure. Reducing allocations in the Snapshot path would have a compounding effect.
+
+### 3. Lock contention is high but may be secondary
+
+`runtime.lock2` at 30.7% is the #1 CPU consumer. Much of this is runtime-internal (GC, scheduler) and would decrease naturally if allocation pressure drops. Some is from `sync.Map` operations and store-level mutexes.
+
+### 4. ChainID() and DefaultChainConfig() called redundantly
+
+`ChainID()` is called at lines 1721 and 1764. `DefaultChainConfig()` is called inside `RecoverSenderFromEthTx` and again implicitly at line 1764. Minor but free to fix.
+
+## Candidate Optimizations
+
+### A. 
Pool/reuse cachekv.Store objects (high impact) + +Replace fresh `cachekv.NewStore` allocations with `sync.Pool` recycling. On return to pool, call `sync.Map.Clear()` (Go 1.23+) to reset state. Eliminates ~9 GB of allocations + reduces GC. + +**Expected impact:** 10-20% TPS improvement +**Risk:** Low — mechanical change, clear lifecycle boundaries + +### B. Lazy per-store creation in snapshots (high impact) + +Currently `materializeOnce.Do` creates cachekv.Store for ALL module stores when any snapshot is taken. Instead, create wrappers only for stores actually accessed via GetKVStore. + +**Expected impact:** 10-15% TPS improvement (depends on how many stores are unused per tx) +**Risk:** Medium — changes cachemulti.Store threading model + +### C. Replace sync.Map with regular map in giga cachekv (medium impact) + +The giga `cachekv.Store` uses `sync.Map` but within OCC, each store belongs to a single goroutine's execution. Regular maps are ~10x cheaper to allocate and access. + +**Expected impact:** 5-10% TPS improvement +**Risk:** Low — need to verify no concurrent access in OCC path + +### D. Cache block-level constants per-tx (low impact) + +Cache `ChainID()`, `DefaultChainConfig()`, `GetParams()`, `EthereumConfigWithSstore()` at block level instead of computing per-tx. + +**Expected impact:** 2-5% TPS improvement +**Risk:** Minimal From 3d721472dcdec80aab0caa94a1eb3b1e27c5d394 Mon Sep 17 00:00:00 2001 From: pdrobnjak Date: Fri, 13 Feb 2026 15:47:40 +0100 Subject: [PATCH 2/5] perf: pool cachekv.Store to reduce allocation pressure Replace per-tx cachekv.Store allocation with sync.Pool recycling for both standard and giga variants. Add CacheMultiStore.Release() and ReleaseDB() to return stores to pools at lifecycle boundaries (Cleanup, RevertToSnapshot, CleanupForTracer). Release replaced stores in SetKVStores/SetGigaKVStores and unused db store in OCC scheduler. Reset() replaces sync.Map fields with fresh instances (not Clear(), which is slower due to internal trie node walking and causes more allocations when repopulated). Targeting the #1 flat allocator from profiling: cachekv.NewStore at 9 GB / 157M objects over 30s at 8600 TPS. Co-Authored-By: Claude Opus 4.6 --- giga/deps/store/cachekv.go | 35 ++++++++++++++++---- giga/deps/tasks/scheduler.go | 5 +++ giga/deps/xevm/state/state.go | 12 +++++++ giga/deps/xevm/state/statedb.go | 16 +++++++++ sei-cosmos/store/cachekv/store.go | 40 ++++++++++++++++++----- sei-cosmos/store/cachemulti/store.go | 49 ++++++++++++++++++++++++++-- x/evm/state/state.go | 12 +++++++ x/evm/state/statedb.go | 16 +++++++++ 8 files changed, 167 insertions(+), 18 deletions(-) diff --git a/giga/deps/store/cachekv.go b/giga/deps/store/cachekv.go index 631ee2cd2d..57a0339825 100644 --- a/giga/deps/store/cachekv.go +++ b/giga/deps/store/cachekv.go @@ -22,15 +22,36 @@ type Store struct { var _ types.CacheKVStore = (*Store)(nil) +var storePool = sync.Pool{ + New: func() any { + return &Store{ + cache: &sync.Map{}, + deleted: &sync.Map{}, + } + }, +} + // NewStore creates a new Store object func NewStore(parent types.KVStore, storeKey types.StoreKey, cacheSize int) *Store { - return &Store{ - cache: &sync.Map{}, - deleted: &sync.Map{}, - parent: parent, - storeKey: storeKey, - cacheSize: cacheSize, - } + s := storePool.Get().(*Store) + s.parent = parent + s.storeKey = storeKey + s.cacheSize = cacheSize + return s +} + +// Reset clears all cached state, making the store ready for reuse. 
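+// The sync.Map fields are swapped for fresh instances rather than cleared in
+// place: per the rationale above, Clear() is slower here (it walks the map's
+// internal trie nodes) and a cleared map allocates more when repopulated.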
+func (store *Store) Reset() { + store.cache = &sync.Map{} + store.deleted = &sync.Map{} + store.parent = nil + store.storeKey = nil +} + +// Release resets the store and returns it to the pool. +func (store *Store) Release() { + store.Reset() + storePool.Put(store) } func (store *Store) GetWorkingHash() ([]byte, error) { diff --git a/giga/deps/tasks/scheduler.go b/giga/deps/tasks/scheduler.go index 16c97e3fdb..8b1291f479 100644 --- a/giga/deps/tasks/scheduler.go +++ b/giga/deps/tasks/scheduler.go @@ -508,6 +508,11 @@ func (s *scheduler) prepareTask(task *deliverTxTask) { return vs[k] }) + // Release the db store since OCC scheduler doesn't use it + if r, ok := ms.(interface{ ReleaseDB() }); ok { + r.ReleaseDB() + } + ctx = ctx.WithMultiStore(ms) } diff --git a/giga/deps/xevm/state/state.go b/giga/deps/xevm/state/state.go index e57e962221..5422bb0a39 100644 --- a/giga/deps/xevm/state/state.go +++ b/giga/deps/xevm/state/state.go @@ -116,6 +116,18 @@ func (s *DBImpl) RevertToSnapshot(rev int) { panic("invalid revision number") } + // Release current ctx's CMS (being abandoned) + type releasable interface{ Release() } + if r, ok := s.ctx.MultiStore().(releasable); ok { + r.Release() + } + // Release abandoned snapshots (rev+1..end), but not rev (becomes new ctx) + for i := len(s.snapshottedCtxs) - 1; i > rev; i-- { + if r, ok := s.snapshottedCtxs[i].MultiStore().(releasable); ok { + r.Release() + } + } + s.ctx = s.snapshottedCtxs[rev] s.snapshottedCtxs = s.snapshottedCtxs[:rev] diff --git a/giga/deps/xevm/state/statedb.go b/giga/deps/xevm/state/statedb.go index eb95116d05..d6b9c1d0d2 100644 --- a/giga/deps/xevm/state/statedb.go +++ b/giga/deps/xevm/state/statedb.go @@ -80,13 +80,29 @@ func (s *DBImpl) SetEVM(evm *vm.EVM) {} func (s *DBImpl) AddPreimage(_ common.Hash, _ []byte) {} func (s *DBImpl) Cleanup() { + s.releaseIntermediateStores() s.tempState = nil s.logger = nil s.snapshottedCtxs = nil } +// releaseIntermediateStores returns cachekv stores from intermediate CMS snapshots +// back to their pools. Never releases snapshottedCtxs[0] — it belongs to the caller. +func (s *DBImpl) releaseIntermediateStores() { + type releasable interface{ Release() } + if r, ok := s.ctx.MultiStore().(releasable); ok { + r.Release() + } + for i := len(s.snapshottedCtxs) - 1; i > 0; i-- { + if r, ok := s.snapshottedCtxs[i].MultiStore().(releasable); ok { + r.Release() + } + } +} + func (s *DBImpl) CleanupForTracer() { s.flushCtxs() + s.releaseIntermediateStores() if len(s.snapshottedCtxs) > 0 { s.ctx = s.snapshottedCtxs[0] } diff --git a/sei-cosmos/store/cachekv/store.go b/sei-cosmos/store/cachekv/store.go index 51d583d2e6..9dbff1ba5b 100644 --- a/sei-cosmos/store/cachekv/store.go +++ b/sei-cosmos/store/cachekv/store.go @@ -27,17 +27,39 @@ type Store struct { var _ types.CacheKVStore = (*Store)(nil) +var storePool = sync.Pool{ + New: func() any { + return &Store{ + cache: &sync.Map{}, + deleted: &sync.Map{}, + unsortedCache: &sync.Map{}, + } + }, +} + // NewStore creates a new Store object func NewStore(parent types.KVStore, storeKey types.StoreKey, cacheSize int) *Store { - return &Store{ - cache: &sync.Map{}, - deleted: &sync.Map{}, - unsortedCache: &sync.Map{}, - sortedCache: nil, - parent: parent, - storeKey: storeKey, - cacheSize: cacheSize, - } + s := storePool.Get().(*Store) + s.parent = parent + s.storeKey = storeKey + s.cacheSize = cacheSize + return s +} + +// Reset clears all cached state, making the store ready for reuse. 
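+// parent and storeKey are also nilled out so a Store idling in the pool does
+// not pin its parent KVStore, or anything reachable from it, against GC.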
+func (store *Store) Reset() { + store.cache = &sync.Map{} + store.deleted = &sync.Map{} + store.unsortedCache = &sync.Map{} + store.sortedCache = nil + store.parent = nil + store.storeKey = nil +} + +// Release resets the store and returns it to the pool. +func (store *Store) Release() { + store.Reset() + storePool.Put(store) } func (store *Store) GetWorkingHash() ([]byte, error) { diff --git a/sei-cosmos/store/cachemulti/store.go b/sei-cosmos/store/cachemulti/store.go index eb4f8bce05..ab5f463d4e 100644 --- a/sei-cosmos/store/cachemulti/store.go +++ b/sei-cosmos/store/cachemulti/store.go @@ -281,21 +281,66 @@ func (cms Store) StoreKeys() []types.StoreKey { return keys } +// Release returns all cachekv stores to their pools. +func (cms Store) Release() { + type releasable interface{ Release() } + if r, ok := cms.db.(releasable); ok { + r.Release() + } + for k, s := range cms.stores { + if r, ok := s.(releasable); ok { + r.Release() + } + delete(cms.stores, k) + } + for k, s := range cms.gigaStores { + if r, ok := s.(releasable); ok { + r.Release() + } + delete(cms.gigaStores, k) + } + for k := range cms.parents { + delete(cms.parents, k) + } +} + +// ReleaseDB releases the db cachekv store back to its pool. +func (cms Store) ReleaseDB() { + type releasable interface{ Release() } + if r, ok := cms.db.(releasable); ok { + r.Release() + } +} + // SetKVStores sets the underlying KVStores via a handler for each key func (cms Store) SetKVStores(handler func(sk types.StoreKey, s types.KVStore) types.CacheWrap) types.MultiStore { // Force-create any lazy stores for k := range cms.parents { cms.getOrCreateStore(k) } + type releasable interface{ Release() } for k, s := range cms.stores { - cms.stores[k] = handler(k, s.(types.KVStore)) + newStore := handler(k, s.(types.KVStore)) + if newStore != s { + if r, ok := s.(releasable); ok { + r.Release() + } + } + cms.stores[k] = newStore } return cms } func (cms Store) SetGigaKVStores(handler func(sk types.StoreKey, s types.KVStore) types.KVStore) types.MultiStore { + type releasable interface{ Release() } for k, s := range cms.gigaStores { - cms.gigaStores[k] = handler(k, s) + newStore := handler(k, s) + if newStore != s { + if r, ok := s.(releasable); ok { + r.Release() + } + } + cms.gigaStores[k] = newStore } return cms } diff --git a/x/evm/state/state.go b/x/evm/state/state.go index eeb64a13e5..5e5a001aec 100644 --- a/x/evm/state/state.go +++ b/x/evm/state/state.go @@ -121,6 +121,18 @@ func (s *DBImpl) RevertToSnapshot(rev int) { panic("invalid revision number") } + // Release current ctx's CMS (being abandoned) + type releasable interface{ Release() } + if r, ok := s.ctx.MultiStore().(releasable); ok { + r.Release() + } + // Release abandoned snapshots (rev+1..end), but not rev (becomes new ctx) + for i := len(s.snapshottedCtxs) - 1; i > rev; i-- { + if r, ok := s.snapshottedCtxs[i].MultiStore().(releasable); ok { + r.Release() + } + } + s.ctx = s.snapshottedCtxs[rev] s.snapshottedCtxs = s.snapshottedCtxs[:rev] diff --git a/x/evm/state/statedb.go b/x/evm/state/statedb.go index bf5a9b454e..446b7ae77b 100644 --- a/x/evm/state/statedb.go +++ b/x/evm/state/statedb.go @@ -80,13 +80,29 @@ func (s *DBImpl) SetEVM(evm *vm.EVM) {} func (s *DBImpl) AddPreimage(_ common.Hash, _ []byte) {} func (s *DBImpl) Cleanup() { + s.releaseIntermediateStores() s.tempState = nil s.logger = nil s.snapshottedCtxs = nil } +// releaseIntermediateStores returns cachekv stores from intermediate CMS snapshots +// back to their pools. 
Never releases snapshottedCtxs[0] — it belongs to the caller. +func (s *DBImpl) releaseIntermediateStores() { + type releasable interface{ Release() } + if r, ok := s.ctx.MultiStore().(releasable); ok { + r.Release() + } + for i := len(s.snapshottedCtxs) - 1; i > 0; i-- { + if r, ok := s.snapshottedCtxs[i].MultiStore().(releasable); ok { + r.Release() + } + } +} + func (s *DBImpl) CleanupForTracer() { s.flushCtxs() + s.releaseIntermediateStores() if len(s.snapshottedCtxs) > 0 { s.ctx = s.snapshottedCtxs[0] } From 1c4d1a9d16ed307c3f48c923c4848893448c876a Mon Sep 17 00:00:00 2001 From: pdrobnjak Date: Fri, 13 Feb 2026 16:10:54 +0100 Subject: [PATCH 3/5] Revert "perf: pool cachekv.Store to reduce allocation pressure" This reverts commit dbd6ad52ff10e003bed8ba2b5744f39b867fb876. --- giga/deps/store/cachekv.go | 35 ++++---------------- giga/deps/tasks/scheduler.go | 5 --- giga/deps/xevm/state/state.go | 12 ------- giga/deps/xevm/state/statedb.go | 16 --------- sei-cosmos/store/cachekv/store.go | 40 +++++------------------ sei-cosmos/store/cachemulti/store.go | 49 ++-------------------------- x/evm/state/state.go | 12 ------- x/evm/state/statedb.go | 16 --------- 8 files changed, 18 insertions(+), 167 deletions(-) diff --git a/giga/deps/store/cachekv.go b/giga/deps/store/cachekv.go index 57a0339825..631ee2cd2d 100644 --- a/giga/deps/store/cachekv.go +++ b/giga/deps/store/cachekv.go @@ -22,36 +22,15 @@ type Store struct { var _ types.CacheKVStore = (*Store)(nil) -var storePool = sync.Pool{ - New: func() any { - return &Store{ - cache: &sync.Map{}, - deleted: &sync.Map{}, - } - }, -} - // NewStore creates a new Store object func NewStore(parent types.KVStore, storeKey types.StoreKey, cacheSize int) *Store { - s := storePool.Get().(*Store) - s.parent = parent - s.storeKey = storeKey - s.cacheSize = cacheSize - return s -} - -// Reset clears all cached state, making the store ready for reuse. -func (store *Store) Reset() { - store.cache = &sync.Map{} - store.deleted = &sync.Map{} - store.parent = nil - store.storeKey = nil -} - -// Release resets the store and returns it to the pool. 
-func (store *Store) Release() { - store.Reset() - storePool.Put(store) + return &Store{ + cache: &sync.Map{}, + deleted: &sync.Map{}, + parent: parent, + storeKey: storeKey, + cacheSize: cacheSize, + } } func (store *Store) GetWorkingHash() ([]byte, error) { diff --git a/giga/deps/tasks/scheduler.go b/giga/deps/tasks/scheduler.go index 8b1291f479..16c97e3fdb 100644 --- a/giga/deps/tasks/scheduler.go +++ b/giga/deps/tasks/scheduler.go @@ -508,11 +508,6 @@ func (s *scheduler) prepareTask(task *deliverTxTask) { return vs[k] }) - // Release the db store since OCC scheduler doesn't use it - if r, ok := ms.(interface{ ReleaseDB() }); ok { - r.ReleaseDB() - } - ctx = ctx.WithMultiStore(ms) } diff --git a/giga/deps/xevm/state/state.go b/giga/deps/xevm/state/state.go index 5422bb0a39..e57e962221 100644 --- a/giga/deps/xevm/state/state.go +++ b/giga/deps/xevm/state/state.go @@ -116,18 +116,6 @@ func (s *DBImpl) RevertToSnapshot(rev int) { panic("invalid revision number") } - // Release current ctx's CMS (being abandoned) - type releasable interface{ Release() } - if r, ok := s.ctx.MultiStore().(releasable); ok { - r.Release() - } - // Release abandoned snapshots (rev+1..end), but not rev (becomes new ctx) - for i := len(s.snapshottedCtxs) - 1; i > rev; i-- { - if r, ok := s.snapshottedCtxs[i].MultiStore().(releasable); ok { - r.Release() - } - } - s.ctx = s.snapshottedCtxs[rev] s.snapshottedCtxs = s.snapshottedCtxs[:rev] diff --git a/giga/deps/xevm/state/statedb.go b/giga/deps/xevm/state/statedb.go index d6b9c1d0d2..eb95116d05 100644 --- a/giga/deps/xevm/state/statedb.go +++ b/giga/deps/xevm/state/statedb.go @@ -80,29 +80,13 @@ func (s *DBImpl) SetEVM(evm *vm.EVM) {} func (s *DBImpl) AddPreimage(_ common.Hash, _ []byte) {} func (s *DBImpl) Cleanup() { - s.releaseIntermediateStores() s.tempState = nil s.logger = nil s.snapshottedCtxs = nil } -// releaseIntermediateStores returns cachekv stores from intermediate CMS snapshots -// back to their pools. Never releases snapshottedCtxs[0] — it belongs to the caller. -func (s *DBImpl) releaseIntermediateStores() { - type releasable interface{ Release() } - if r, ok := s.ctx.MultiStore().(releasable); ok { - r.Release() - } - for i := len(s.snapshottedCtxs) - 1; i > 0; i-- { - if r, ok := s.snapshottedCtxs[i].MultiStore().(releasable); ok { - r.Release() - } - } -} - func (s *DBImpl) CleanupForTracer() { s.flushCtxs() - s.releaseIntermediateStores() if len(s.snapshottedCtxs) > 0 { s.ctx = s.snapshottedCtxs[0] } diff --git a/sei-cosmos/store/cachekv/store.go b/sei-cosmos/store/cachekv/store.go index 9dbff1ba5b..51d583d2e6 100644 --- a/sei-cosmos/store/cachekv/store.go +++ b/sei-cosmos/store/cachekv/store.go @@ -27,39 +27,17 @@ type Store struct { var _ types.CacheKVStore = (*Store)(nil) -var storePool = sync.Pool{ - New: func() any { - return &Store{ - cache: &sync.Map{}, - deleted: &sync.Map{}, - unsortedCache: &sync.Map{}, - } - }, -} - // NewStore creates a new Store object func NewStore(parent types.KVStore, storeKey types.StoreKey, cacheSize int) *Store { - s := storePool.Get().(*Store) - s.parent = parent - s.storeKey = storeKey - s.cacheSize = cacheSize - return s -} - -// Reset clears all cached state, making the store ready for reuse. -func (store *Store) Reset() { - store.cache = &sync.Map{} - store.deleted = &sync.Map{} - store.unsortedCache = &sync.Map{} - store.sortedCache = nil - store.parent = nil - store.storeKey = nil -} - -// Release resets the store and returns it to the pool. 
-func (store *Store) Release() { - store.Reset() - storePool.Put(store) + return &Store{ + cache: &sync.Map{}, + deleted: &sync.Map{}, + unsortedCache: &sync.Map{}, + sortedCache: nil, + parent: parent, + storeKey: storeKey, + cacheSize: cacheSize, + } } func (store *Store) GetWorkingHash() ([]byte, error) { diff --git a/sei-cosmos/store/cachemulti/store.go b/sei-cosmos/store/cachemulti/store.go index ab5f463d4e..eb4f8bce05 100644 --- a/sei-cosmos/store/cachemulti/store.go +++ b/sei-cosmos/store/cachemulti/store.go @@ -281,66 +281,21 @@ func (cms Store) StoreKeys() []types.StoreKey { return keys } -// Release returns all cachekv stores to their pools. -func (cms Store) Release() { - type releasable interface{ Release() } - if r, ok := cms.db.(releasable); ok { - r.Release() - } - for k, s := range cms.stores { - if r, ok := s.(releasable); ok { - r.Release() - } - delete(cms.stores, k) - } - for k, s := range cms.gigaStores { - if r, ok := s.(releasable); ok { - r.Release() - } - delete(cms.gigaStores, k) - } - for k := range cms.parents { - delete(cms.parents, k) - } -} - -// ReleaseDB releases the db cachekv store back to its pool. -func (cms Store) ReleaseDB() { - type releasable interface{ Release() } - if r, ok := cms.db.(releasable); ok { - r.Release() - } -} - // SetKVStores sets the underlying KVStores via a handler for each key func (cms Store) SetKVStores(handler func(sk types.StoreKey, s types.KVStore) types.CacheWrap) types.MultiStore { // Force-create any lazy stores for k := range cms.parents { cms.getOrCreateStore(k) } - type releasable interface{ Release() } for k, s := range cms.stores { - newStore := handler(k, s.(types.KVStore)) - if newStore != s { - if r, ok := s.(releasable); ok { - r.Release() - } - } - cms.stores[k] = newStore + cms.stores[k] = handler(k, s.(types.KVStore)) } return cms } func (cms Store) SetGigaKVStores(handler func(sk types.StoreKey, s types.KVStore) types.KVStore) types.MultiStore { - type releasable interface{ Release() } for k, s := range cms.gigaStores { - newStore := handler(k, s) - if newStore != s { - if r, ok := s.(releasable); ok { - r.Release() - } - } - cms.gigaStores[k] = newStore + cms.gigaStores[k] = handler(k, s) } return cms } diff --git a/x/evm/state/state.go b/x/evm/state/state.go index 5e5a001aec..eeb64a13e5 100644 --- a/x/evm/state/state.go +++ b/x/evm/state/state.go @@ -121,18 +121,6 @@ func (s *DBImpl) RevertToSnapshot(rev int) { panic("invalid revision number") } - // Release current ctx's CMS (being abandoned) - type releasable interface{ Release() } - if r, ok := s.ctx.MultiStore().(releasable); ok { - r.Release() - } - // Release abandoned snapshots (rev+1..end), but not rev (becomes new ctx) - for i := len(s.snapshottedCtxs) - 1; i > rev; i-- { - if r, ok := s.snapshottedCtxs[i].MultiStore().(releasable); ok { - r.Release() - } - } - s.ctx = s.snapshottedCtxs[rev] s.snapshottedCtxs = s.snapshottedCtxs[:rev] diff --git a/x/evm/state/statedb.go b/x/evm/state/statedb.go index 446b7ae77b..bf5a9b454e 100644 --- a/x/evm/state/statedb.go +++ b/x/evm/state/statedb.go @@ -80,29 +80,13 @@ func (s *DBImpl) SetEVM(evm *vm.EVM) {} func (s *DBImpl) AddPreimage(_ common.Hash, _ []byte) {} func (s *DBImpl) Cleanup() { - s.releaseIntermediateStores() s.tempState = nil s.logger = nil s.snapshottedCtxs = nil } -// releaseIntermediateStores returns cachekv stores from intermediate CMS snapshots -// back to their pools. Never releases snapshottedCtxs[0] — it belongs to the caller. 
-func (s *DBImpl) releaseIntermediateStores() { - type releasable interface{ Release() } - if r, ok := s.ctx.MultiStore().(releasable); ok { - r.Release() - } - for i := len(s.snapshottedCtxs) - 1; i > 0; i-- { - if r, ok := s.snapshottedCtxs[i].MultiStore().(releasable); ok { - r.Release() - } - } -} - func (s *DBImpl) CleanupForTracer() { s.flushCtxs() - s.releaseIntermediateStores() if len(s.snapshottedCtxs) > 0 { s.ctx = s.snapshottedCtxs[0] } From 9de4e640ba4c4379f7a35d1abb7f10590f002566 Mon Sep 17 00:00:00 2001 From: pdrobnjak Date: Fri, 13 Feb 2026 16:17:31 +0100 Subject: [PATCH 4/5] Move CLAUDE.md contents to AGENTS.md for multi-agent compatibility CLAUDE.md is only read by Claude Code, while AGENTS.md is recognized by multiple agentic coding tools. Each CLAUDE.md now references its co-located AGENTS.md so the context is available to all agents. Co-Authored-By: Claude Opus 4.6 --- AGENTS.md | 232 +++++++++++++++++++++++++++++++++++++++++++ CLAUDE.md | 24 +---- benchmark/AGENTS.md | 188 +++++++++++++++++++++++++++++++++++ benchmark/CLAUDE.md | 189 +---------------------------------- giga/tests/AGENTS.md | 190 +++++++++++++++++++++++++++++++++++ giga/tests/CLAUDE.md | 191 +---------------------------------- 6 files changed, 613 insertions(+), 401 deletions(-) create mode 100644 AGENTS.md create mode 100644 benchmark/AGENTS.md create mode 100644 giga/tests/AGENTS.md diff --git a/AGENTS.md b/AGENTS.md new file mode 100644 index 0000000000..f8591ba5db --- /dev/null +++ b/AGENTS.md @@ -0,0 +1,232 @@ +# AGENTS - Sei Chain (Monorepo) + +This document is the primary orientation for automated agents operating in +`/Users/pdrobnjak/sei/sei-chain`. + +It reflects the repository as discovered in this working copy and is intended for +Go-first workflows. + +## Scope and project shape + +- This repo is a Go workspace defined in `go.work` with these active modules: + - `.` (main `sei-chain` module) + - `./sei-cosmos` +- Additional sibling modules include: + - `./sei-tendermint` + - `./sei-ibc-go` + - `./sei-wasmd` + - `./sei-wasmvm` + - `./sei-db` + - `./oracle/price-feeder` + - `./sei-iavl` +- Do not assume one universal command for all modules; treat each module as having + its own `Makefile` contract where noted. +- Go toolchain expectations are documented in `go.mod`/`go.work` files as + `go 1.25.6` for main and workspace modules. +- This document aligns with the root `CLAUDE.md` guidance and repository-specific + operational notes. + +## Build / lint / test command matrix + +### Root module (`sei-chain`) + +- `make install` + - Installs `./cmd/seid` using module build tags and `-ldflags` from root + `Makefile`. +- `make build` + - Build binary to `./build/seid`. +- `make install-mock-balances`, `make install-bench`, + `make install-with-race-detector` + - Alternate build variants used in specific CI/dev workflows. +- `make lint` + - Runs `golangci-lint`, gofmt check (`gofmt -d -s`) and `go mod verify`. +- `make test-group-` + - Uses `NUM_SPLIT` (default 4) and package bucketing. + - Example: `make test-group-0` or `NUM_SPLIT=8 make test-group-3`. +- `make clean` + - Removes `./build` artifacts. 
+ +### Submodule commands that are routinely used + +- `make -C sei-cosmos test` (standard) +- `make -C sei-cosmos test-all` +- `make -C sei-cosmos lint` +- `make -C sei-cosmos test-unit` +- `make -C sei-cosmos test-race` +- `make -C sei-cosmos test-cover` + +- `make -C sei-ibc-go test` +- `make -C sei-ibc-go test-all` + +- `make -C sei-wasmd test` (this target delegates to `test-unit`) +- `make -C sei-wasmd lint` +- `make -C sei-wasmd test-cover` + +- `make -C sei-tendermint test` +- `make -C sei-tendermint test-race` + +- `make -C sei-db test-all` +- `make -C sei-db lint-all` + +- `make -C oracle/price-feeder test-unit` +- `make -C oracle/price-feeder lint` + +- `make -C sei-iavl test` + +### CI-parity baseline commands + +These are the command patterns currently used in GitHub workflows: + +- `go test -race -tags='ledger test_ledger_mock' -timeout=30m ./...` + (root + sei-cosmos modules in CI). +- `go test -tags='ledger test_ledger_mock' -timeout=30m -covermode=atomic -coverprofile=coverage.out -coverpkg=./... ./...` + (coverage jobs). +- `go test -mod=readonly` appears throughout module makefiles; prefer this flag for + CI-grade test and lint parity. +- `go mod verify` runs in lint/verification flows and should not be skipped when + checking dependency integrity. + +### Single-test and focused test commands + +Use these patterns for quick iteration: + +- Single test in a package: + - `go test -mod=readonly ./app -run TestStateMachine -count=1` +- Single test with tags used by this repo: + - `go test -mod=readonly -tags='ledger test_ledger_mock' ./app -run TestStateMachine -count=1` +- Single package with race detector: + - `go test -mod=readonly -race ./app -run TestStateMachine -count=1` +- Single test file style filter (regexp): + - `go test ./x/oracle/... -run '^Test.*Price.*$'` +- Single module focus example: + - `go test -mod=readonly -tags='cgo ledger test_ledger_mock' ./...` + (when run from `sei-cosmos` or `sei-ibc-go`). + +If your change affects a single module, prefer running the command in that module's +directory to avoid long cross-module test suites. + +## Required formatting and import style + +- Always keep code formatted with standard gofmt (simplify mode required). +- All Go files must be gofmt compliant. After editing any `.go` file: + - `gofmt -s -w ` +- Quick check: + - `gofmt -s -l .` +- Before commit, run style checks used by project lint: + - `gofmt -w -s` style via make targets, and `goimports` where configured. +- Use import grouping with stdlib first, a blank line, then third-party/internal. +- Keep local import aliases consistent and avoid unnecessary aliases unless naming + collisions require them. +- Preserve generated file exemptions noted in make rules (for example proto or statik + artifacts) unless the change explicitly regenerates them. +- For deterministic local formatting in Go files, rely on tooling instead of manual + alignment. + +## Naming and type conventions + +- Exported identifiers: + - Use `CamelCase` / `PascalCase`. + - Keep acronyms in conventional block style (`ID`, `URL`, `JSON`) unless codebase + has a local historical exception. +- Unexported identifiers: + - Use `camelCase`. +- Types and function names should describe domain intent, not implementation detail. +- Prefer concrete types over `interface{}` at package boundaries. +- Use enums / constants for stringly-typed domains when possible. +- Favor struct field names that mirror domain semantics (`WindowSize`, `MaxItems`, etc.) + and are self-documenting without excessive comments. 
+- Keep receiver names short and consistent per type (`lg`, `cfg`, `app`, `tm`). +- Prefer constructor functions (`NewXxx`) that return immutable/configured values and + validation errors early. + +## Error handling conventions + +- Return `error` values instead of panicking for expected runtime failures. +- Wrap context with `%w` when adding call-site context so call stacks remain + inspectable with `errors.Is`/`errors.As`. +- Check and branch on sentinel errors where needed (for example `errors.Is(err, + ErrX)`). +- Use `errors.New` for static messages and `fmt.Errorf("...: %w", err)` for + wrapped propagation. +- Keep early returns readable: + - validate inputs first, then execute happy path. +- Do not suppress errors in deferred blocks unless explicitly documented and + intentionally transformed. + +## Testing conventions + +- Table-driven tests are preferred for behavior matrices. +- Use helpers marked with `t.Helper()`. +- For one-off deterministic failures, use `t.Fatalf` / `t.Errorf` with concise + context. +- The codebase commonly uses `require` and `assert` from Testify; use them + consistently in new tests. +- Add focused `-run` regex coverage for single regression checks during development. +- Include race coverage on concurrent code paths when feasible (`-race`). +- Avoid flaky wall-clock-based assertions; use deterministic clocks or fake time + helpers where available. + +## Test and benchmark hygiene + +- Prefer running test suites with short timeouts for focused local runs, + and increase for simulation/integration paths. +- For long-running or expensive suites, mark with package-appropriate tags and + document why. +- Keep benchmark changes isolated; use dedicated `benchmark` targets where present. +- Treat integration/e2e commands as higher-cost and run sparingly in local + developer flow unless specifically touching integration surfaces. + +## Linting details and practical guidance + +- `golangci-lint` is the primary static checker. +- `line length`, `complexity`, and deep lints are intentionally controlled by + repo config; follow existing file patterns if introducing logic that might trigger + `prealloc`, `ineffassign`, `errcheck`, `govet`, `staticcheck` and `gosec`. +- If lint reports style-only import order/formatting drift, run the formatter and + rerun lint before pushing. +- `make lint` in module makefiles often includes both tool install/run and format + validation; if it fails on newly modified files, re-run just formatting first + then rerun lint. + +## Dependency and module hygiene + +- Keep module boundaries explicit and run module-local checks with `-mod=readonly`. +- Use `go mod verify` to validate module cache integrity when touching + dependency-sensitive code paths. +- Do not edit dependency files casually; rely on standard `go` workflows. + +## Files and directories commonly edited with caution + +- Avoid manual edits in generated/compiled output directories + (protobuf, statik, vendored-like generated assets) unless regeneration is the + explicit change request. +- Favor minimal diff footprints in ABI/codec-related files. +- Keep test fixtures deterministic and preferably immutable once committed. + +## Cursor/Copilot rules + +- No `.cursor/rules/`, `.cursorrules`, or `.github/copilot-instructions.md` + files were found in this repository. +- If that changes in the future, update this document before large agentic runs. + +## Quick pre-change checklist for agents + +1. Identify target module(s) from changed paths. +2. 
Run module-appropriate format/lint commands after edits. +3. Run focused tests first; then expand to module-wide `test`/`test-all` as needed. +4. Add/adjust single-test run commands when narrowing regressions. +5. Re-check `go mod`/dependency integrity if modules are changed. + +## File references used for this guide + +- `Makefile` +- `sei-cosmos/Makefile` +- `sei-ibc-go/Makefile` +- `sei-wasmd/Makefile` +- `sei-db/Makefile` +- `oracle/price-feeder/Makefile` +- `.golangci.yml` +- `go.mod`, `go.work` +- `.github/workflows/go-test.yml` +- `.github/workflows/go-test-coverage.yml` +- `README.md` diff --git a/CLAUDE.md b/CLAUDE.md index 6a19a87161..971eee93cb 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -1,23 +1 @@ -# Sei Chain - -## Code Style - -### Go Formatting - -All Go files must be `gofmt` compliant. After modifying any `.go` files, run: - -```bash -gofmt -s -w -``` - -Or verify compliance with: - -```bash -gofmt -s -l . -``` - -This command should produce no output if all files are properly formatted. - -## Benchmarking - -See [benchmark/CLAUDE.md](benchmark/CLAUDE.md) for benchmark usage, environment variables, and comparison workflows. +See [AGENTS.md](AGENTS.md) for agent instructions. diff --git a/benchmark/AGENTS.md b/benchmark/AGENTS.md new file mode 100644 index 0000000000..a4da4358be --- /dev/null +++ b/benchmark/AGENTS.md @@ -0,0 +1,188 @@ +# Benchmark + +## Single scenario + +```bash +GIGA_EXECUTOR=true GIGA_OCC=true benchmark/benchmark.sh +``` + +By default, the benchmark runs for `DURATION=120` seconds, auto-captures all 6 profile types, extracts TPS stats, and exits. Profiles are saved to `/tmp/sei-bench/pprof/`, TPS data to `/tmp/sei-bench/tps.txt`, and the full log to `/tmp/sei-bench/output.log`. + +Use `DURATION=0` to run forever (manual capture, original behavior). + +TPS is logged every 5s as `tps=` (with ANSI color codes). For manual extraction: + +```bash +sed 's/\x1b\[[0-9;]*m//g' /tmp/sei-bench/output.log | sed -n 's/.*tps=\([0-9.]*\).*/\1/p' +``` + +Available scenarios in `benchmark/scenarios/`: `evm.json` (default), `erc20.json`, `mixed.json`, `default.json`. + +```bash +# Use a different scenario +BENCHMARK_CONFIG=benchmark/scenarios/erc20.json benchmark/benchmark.sh +``` + +## Environment variables + +### benchmark.sh + +| Var | Default | Purpose | +|-----|---------|---------| +| `BENCHMARK_PHASE` | `all` | `init` (build+init+configure), `start` (run node), `all` (both) | +| `SEI_HOME` | `$HOME/.sei` | Final chain data dir. If != ~/.sei, init in ~/.sei then `mv` | +| `PORT_OFFSET` | `0` | Added to all ports (RPC, P2P, pprof, gRPC, etc.) | +| `SEID_BIN` | `""` | Pre-built binary path. If set, skip build + copy to ~/go/bin/seid | +| `LOG_FILE` | `""` | Redirect seid output to file | +| `BENCHMARK_CONFIG` | `$SCRIPT_DIR/scenarios/evm.json` | Scenario config file (absolute path resolved from script location) | +| `BENCHMARK_TXS_PER_BATCH` | `1000` | Transactions per batch | +| `GIGA_EXECUTOR` | `false` | Enable evmone-based EVM executor | +| `GIGA_OCC` | `false` | Enable OCC for Giga Executor | +| `DB_BACKEND` | `goleveldb` | Database backend (goleveldb, memdb, cleveldb, rocksdb) | +| `MOCK_BALANCES` | `true` | Use mock balances during benchmark | +| `DISABLE_INDEXER` | `true` | Disable indexer for benchmark (reduces I/O overhead) | +| `DEBUG` | `false` | Print all log output without filtering | +| `DURATION` | `120` | Auto-stop after N seconds (0 = run forever) | + +### benchmark-compare.sh + +Inherits all benchmark.sh vars via delegation. 
Additionally: + +| Var | Default | Purpose | +|-----|---------|---------| +| `DURATION` | `120` | How long (seconds) to run each node before stopping | +| `GIGA_EXECUTOR` | **`true`** | Overrides benchmark.sh default (false) | +| `GIGA_OCC` | **`true`** | Overrides benchmark.sh default (false) | +| `DB_BACKEND` | `goleveldb` | Forwarded to build and init phases | + +**Note:** `GIGA_EXECUTOR` and `GIGA_OCC` default to `true` in the compare script but `false` in benchmark.sh. The compare script is designed for performance comparison where Giga Executor is typically enabled. + +## Parallel multi-scenario comparison + +Use `benchmark/benchmark-compare.sh` to run multiple git commits side-by-side (minimum 2 scenarios required): + +```bash +benchmark/benchmark-compare.sh \ + pre-opt=fd2e28d74 \ + lazy-cms=82acf458d \ + lazy-cms-fix=37a17fd02 +``` + +Each scenario gets its own binary, home dir, and port set (offset by 100). Results are printed at the end with median/avg/min/max TPS. Raw data in `/tmp/sei-bench/