Conversation
- Cache Prometheus WithLabelValues at init (metrics.go): eliminates per-height map lookups and slice allocs for sync state, backfill stages, and store operations.
- Move transient error needles to package-level var (celestia_node.go): avoids per-retry []string heap allocation.
- Cache bearer auth metadata map at construction (celestia_app.go): avoids map+string alloc per gRPC call.
- Replace map[string]any with struct in MarshalBlob (service.go): ~200 bytes less per blob marshal.
- Preallocate blob slices in GetBlobs, BlobGetAll, and gRPC GetAll: reduces append doublings on typical result sets.
- Optimize notifier filterEvent: skip allocation when all or no blobs match; use an exact-capacity slice for partial matches.
- Skip namespace map allocation for header-only subscriptions.
- Use slice indexing in httpToWS instead of TrimPrefix.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
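The map-to-struct change for blob marshaling can be sketched as below. This is a minimal illustration of the pattern, not the actual code: the `blobJSON` field set and the `marshalBlob` helper are hypothetical, since MarshalBlob's real signature isn't shown in this review.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// blobJSON is a hypothetical fixed-shape struct standing in for the
// map[string]any previously built on every marshal. A struct avoids the
// per-key hash-map buckets and interface boxing a map requires, so each
// call allocates less.
type blobJSON struct {
	Namespace string `json:"namespace"`
	Data      string `json:"data"`
	Height    uint64 `json:"height"`
}

func marshalBlob(ns, data string, height uint64) ([]byte, error) {
	return json.Marshal(blobJSON{Namespace: ns, Data: data, Height: height})
}

func main() {
	out, err := marshalBlob("ns1", "aGVsbG8=", 42)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

A side benefit of the struct form is a stable field order in the output, which a `map[string]any` only achieves because encoding/json sorts map keys.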
📝 Walkthrough

This PR optimizes memory allocation and caching across multiple layers: preallocation of slices reduces reallocations in the API and storage layers, metadata for bearer tokens is cached in gRPC credentials, transient error substrings are moved to package-level storage, metrics pre-resolves known labels to avoid per-call lookups, and JSON marshaling uses a struct instead of dynamic maps.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes
🚥 Pre-merge checks: ✅ 3 passed
🧹 Nitpick comments (1)
pkg/api/service.go (1)
73-73: Cap the preallocation hint defensively.

Line 73 scales capacity directly with len(namespaces). A small cap keeps the optimization but avoids large upfront allocations on pathological inputs.

♻️ Suggested tweak
```diff
-allBlobs := make([]types.Blob, 0, len(namespaces)*8) // preallocate for typical workload
+const maxNamespacePrealloc = 64
+nsCap := len(namespaces)
+if nsCap > maxNamespacePrealloc {
+	nsCap = maxNamespacePrealloc
+}
+allBlobs := make([]types.Blob, 0, nsCap*8) // preallocate for typical workload
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/api/service.go` at line 73: the preallocation for allBlobs (created as `allBlobs := make([]types.Blob, 0, len(namespaces)*8)`) should be defensively capped to avoid huge allocations for pathological namespace counts. Compute a capacity value from `len(namespaces)*8`, clamp it to a reasonable max (e.g., 1024 or another project-appropriate constant), and pass the clamped value to `make` so allBlobs uses the smaller capacity.
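The suggested clamp can be exercised in isolation. A minimal sketch, with a placeholder struct standing in for types.Blob and the clamp factored into a helper for testability:

```go
package main

import "fmt"

type blob struct{ data []byte } // stand-in for types.Blob

// preallocCap clamps the capacity hint so a pathological namespace count
// cannot trigger a huge upfront allocation; the constant mirrors the
// value proposed in the review diff.
func preallocCap(numNamespaces int) int {
	const maxNamespacePrealloc = 64
	if numNamespaces > maxNamespacePrealloc {
		numNamespaces = maxNamespacePrealloc
	}
	return numNamespaces * 8
}

func main() {
	// For a typical request, capacity scales with the namespace count.
	allBlobs := make([]blob, 0, preallocCap(3))
	fmt.Println(cap(allBlobs)) // 24

	// For an adversarial count, the hint is clamped to 64*8.
	fmt.Println(preallocCap(10000)) // 512
}
```

Because `append` still grows the slice past the hint when needed, the clamp only bounds the upfront reservation; correctness is unaffected.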
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- pkg/api/grpc/blob_service.go
- pkg/api/notifier.go
- pkg/api/service.go
- pkg/fetch/celestia_app.go
- pkg/fetch/celestia_node.go
- pkg/metrics/metrics.go
- pkg/store/sqlite.go