perf(store): replace sync.Map with plain map in cachekv stores#2855
Conversation
Within OCC, each worker gets its own CacheMultiStore chain with
dedicated cachekv.Store instances — zero concurrent access. The
sync.Map thread-safety overhead (CAS, internal node allocs) is
pure waste in this single-goroutine context.
Replace *sync.Map with typed Go maps in both sei-cosmos and giga
cachekv implementations:
- cache: map[string]*types.CValue
- deleted: map[string]struct{}
- unsortedCache: map[string]struct{}
Keep sync.RWMutex for defense-in-depth (uncontested, <20ns).
Micro-benchmarks: 32-56% faster across all cachekv operations,
4 allocs/op eliminated from Set paths.
TPS benchmark: ~8,936 avg (+14.6% cumulative over origin/main).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Codecov Report ❌ Patch coverage is
Additional details and impacted files

```diff
@@            Coverage Diff             @@
##             main    #2855      +/-   ##
==========================================
- Coverage   46.28%   46.23%   -0.05%
==========================================
  Files        2005     2005
  Lines      163018   163027       +9
==========================================
- Hits        75449    75374      -75
- Misses      81047    81131      +84
  Partials     6522     6522
```
```diff
+for key, cv := range store.cache {
 	// Replace with a clean (non-dirty) version of the same value
-	store.cache.Store(key, types.NewCValue(cv.Value(), false))
-	return true
-})
+	store.cache[key] = types.NewCValue(cv.Value(), false)
+}
```
Check warning — Code scanning / CodeQL: Iteration over map
```diff
+for key, cv := range store.cache {
 	kbz := []byte(key)
 	if bytes.Compare(kbz, start) < 0 || bytes.Compare(kbz, end) >= 0 {
 		// we don't want to break out of the iteration since cache isn't sorted
-		return true
+		continue
 	}
-	cv := value.(*types.CValue)
 	if cv.Value() == nil {
-		delete(keyStrs, key.(string))
+		delete(keyStrs, key)
 	} else {
-		keyStrs[key.(string)] = struct{}{}
+		keyStrs[key] = struct{}{}
 	}
-	return true
-})
+}
```
Check warning — Code scanning / CodeQL: Iteration over map
```diff
+for cKey := range store.unsortedCache {
 	if dbm.IsKeyInDomain(conv.UnsafeStrToBytes(cKey), start, end) {
-		cacheValue, ok := store.cache.Load(key)
-		if ok {
-			unsorted = append(unsorted, &kv.Pair{Key: []byte(cKey), Value: cacheValue.(*types.CValue).Value()})
+		if cacheValue, ok := store.cache[cKey]; ok {
+			unsorted = append(unsorted, &kv.Pair{Key: []byte(cKey), Value: cacheValue.Value()})
 		}
 	}
-	return true
-})
+}
```
Check warning — Code scanning / CodeQL: Iteration over map
```diff
+for key, cv := range store.cache {
 	kbz := []byte(key)
 	if bytes.Compare(kbz, start) < 0 || bytes.Compare(kbz, end) >= 0 {
 		// we don't want to break out of the iteration since cache isn't sorted
-		return true
+		continue
 	}
-	cv := value.(*types.CValue)
 	if cv.Value() == nil {
-		delete(keyStrs, key.(string))
+		delete(keyStrs, key)
 	} else {
-		keyStrs[key.(string)] = struct{}{}
+		keyStrs[key] = struct{}{}
 	}
-	return true
-})
+}
```
Check warning — Code scanning / CodeQL: Iteration over map
Summary
Replace sync.Map with plain map in cachekv stores for better performance

Stack
8/19 — depends on perf/skip-redundant-snapshot (replaces auto-closed #2822)
🤖 Generated with Claude Code