feat: support standard rpc getprogramaccounts for compression #2304

sergeytimoshin wants to merge 1 commit into sergey/photon-combined-discriminator-data from
Conversation
feat: add support for getProgramAccounts standard rpc calls for compression
feat: structured error logging
Logic Review: Three issues found in the new per-tree init-lock design. The design itself is correct and an improvement over the old Entry API pattern: no data corruption or immediate deadlocks.

F-01 (Low): Epoch consistency window, map updated before processor

In the epoch-transition branch, the DashMap is updated to the new epoch before the processor's internal state is:

```rust
// line ~2825: map now advertises epoch 6
self.state_processors.insert(
    tree_accounts.merkle_tree,
    (epoch_info.epoch, processor_clone.clone()),
);
// processor internal state still epoch 5 here
processor_clone
    .lock()
    .await
    .update_epoch(epoch_info.epoch, epoch_info.phases.clone());
```

Any concurrent caller that holds the processor Arc (obtained from a prior map lookup) can observe the new epoch in the map while the processor still carries the previous epoch's state.

Fix: swap the order. Update the processor first, then insert into the map.
F-02 (Medium): Processor removed from the map without the init lock

The constraint-error handler removes the processor from the map without acquiring the init lock:

```rust
self.state_processors.remove(&tree_accounts.merkle_tree); // no init lock held
```

Concurrent scenario: the removal can race with concurrent initialization of the same tree, leaving a bounded window in which duplicate submissions are possible.

Fix: either acquire the init lock in the error handler before removing, or add an explicit comment that this is intentional best-effort invalidation and document the bounded duplicate-submission risk.
F-03 (Medium): Latent deadlock, processor lock held across a long .await

The pre-warm path acquires the processor lock and holds it across a long async operation:

```rust
let mut p = processor.lock().await; // processor lock acquired
p.prewarm_from_indexer(cache, ...).await // long async hold
```

Meanwhile, the epoch-transition branch takes the init lock first and only then the processor lock:

```rust
let _init_guard = init_lock.lock().await; // init lock held
// ...
processor_clone.lock().await.update_epoch(...) // tries to acquire processor lock
```

If any future code path enters the epoch-transition branch while already holding the same tree's processor lock, the two lock orders invert and the tasks deadlock. Suggested guard comment at the top of the affected function:

```rust
// SAFETY: Callers must not hold this tree's processor Mutex when calling this function.
// The epoch-transition path acquires the processor lock internally. Holding it on entry
// will deadlock for same-tree epoch transitions.
```