📝 Walkthrough

This pull request adds comprehensive support for compressed accounts and delegations to the MagicBlock validator. The changes introduce two new crates (…).

Assessment against linked issues

Out-of-scope changes

Suggested reviewers
Actionable comments posted: 16
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
test-integration/test-runner/bin/run_tests.rs (2)
244-253: Inconsistent cleanup: Light validator started but cleaned up as devnet.

The validator is started with `ValidatorCluster::Light` (line 246), but cleanup paths at lines 275, 292, and 301 use `cleanup_devnet_only`. For Light validators, you should use `cleanup_light_validator` to ensure the light test-validator processes are properly stopped, as done in `run_chainlink_tests`.

🐛 Proposed fix
```diff
@@ -272,7 +272,7 @@ fn run_table_mania_and_committor_tests(
     match run_test(test_table_mania_dir, Default::default()) {
         Ok(output) => output,
         Err(err) => {
             eprintln!("Failed to run table-mania: {:?}", err);
-            cleanup_devnet_only(&mut devnet_validator);
+            cleanup_light_validator(&mut devnet_validator, "light");
             return Err(err.into());
         }
     }
@@ -289,7 +289,7 @@ fn run_table_mania_and_committor_tests(
     match run_test(test_committor_dir, Default::default()) {
         Ok(output) => output,
         Err(err) => {
             eprintln!("Failed to run committor: {:?}", err);
-            cleanup_devnet_only(&mut devnet_validator);
+            cleanup_light_validator(&mut devnet_validator, "light");
             return Err(err.into());
         }
     }
@@ -298,7 +298,7 @@
         success_output()
     };
-    cleanup_devnet_only(&mut devnet_validator);
+    cleanup_light_validator(&mut devnet_validator, "light");
     Ok((table_mania_test_output, committor_test_output))
```
304-312: Light validator passed to wrong parameter position in `wait_for_ctrlc`.

The `devnet_validator` here is actually a Light validator (started with `ValidatorCluster::Light`), but it's passed as the first argument (the `devnet_validator` position). According to the `wait_for_ctrlc` signature, the third parameter is for `light_validator`. This will cause incorrect cleanup behavior.

🐛 Proposed fix

```diff
-    wait_for_ctrlc(devnet_validator, None, None, success_output())?,
+    wait_for_ctrlc(None, None, devnet_validator, success_output())?,
```

magicblock-committor-program/src/state/chunks.rs (2)
136-140: `.expect()` usage in production code.

While the invariant is logically sound (iteration bounds guarantee `is_chunk_delivered` returns `Some`), the coding guidelines require either proper error handling or explicit invariant documentation.

🔧 Suggested fix with filter_map

```diff
 pub fn get_missing_chunks(&self) -> HashSet<usize> {
     (0..self.count)
-        .filter(|&i| !self.is_chunk_delivered(i).expect("invariant"))
+        .filter(|&i| {
+            // INVARIANT: i is always < self.count within this range,
+            // so is_chunk_delivered always returns Some
+            !self.is_chunk_delivered(i).unwrap_or(false)
+        })
         .collect()
 }
```

Alternatively, add an explicit `// SAFETY:` comment documenting why this cannot fail.
147-158: `.expect()` usage in `From` implementation.

Similar to above: the invariant is valid (indices always within bounds), but per guidelines this should have either explicit error handling or documented invariant reasoning.

🔧 Suggested fix with explicit comment

```diff
 impl From<(Vec<bool>, u16)> for Chunks {
     fn from((vec, chunk_size): (Vec<bool>, u16)) -> Self {
         let mut this = Chunks::new(vec.len(), chunk_size);
         vec.into_iter().enumerate().for_each(|(i, d)| {
             if d {
-                this.set_chunk_delivered(i).expect("invariant");
+                // INVARIANT: i < vec.len() == this.count, so set_chunk_delivered
+                // always succeeds. Ignoring error is safe as it cannot occur.
+                let _ = this.set_chunk_delivered(i);
             }
         });
         this
     }
 }
```

This pattern silently ignores the (impossible) error while documenting the invariant.
magicblock-accounts/src/scheduled_commits_processor.rs (1)
186-187: Consider replacing `.expect()` with proper error handling.

The `result_subscriber.await.expect(SUBSCRIPTION_ERR_MSG)` will panic if the `CommittorService` is dropped before sending the receiver. While this may indicate a severe initialization failure, per coding guidelines, `.expect()` in production code should be replaced with proper error handling or documented as an invariant.

Suggested fix

```diff
-    let mut result_receiver =
-        result_subscriber.await.expect(SUBSCRIPTION_ERR_MSG);
+    let mut result_receiver = match result_subscriber.await {
+        Ok(receiver) => receiver,
+        Err(_) => {
+            error!("{}", SUBSCRIPTION_ERR_MSG);
+            return;
+        }
+    };
```

magicblock-committor-service/src/intent_executor/task_info_fetcher.rs (1)
183-203: Retrying on `DeserializeError` may not be effective.

Deserialization errors (lines 188-190) are typically deterministic: if the data can't be deserialized, retrying won't help unless the underlying data changes. Consider breaking immediately on deserialization errors, as is done for `NoCompressedAccount` and `NoCompressedData`.

♻️ Suggested change

```diff
 TaskInfoFetcherError::NoCompressedAccount(_) => break Err(err),
 TaskInfoFetcherError::NoCompressedData(_) => break Err(err),
-TaskInfoFetcherError::DeserializeError(ref err) => {
-    warn!("Deserialize compressed delegation record error: {:?}, attempt: {}", err, i);
-}
+TaskInfoFetcherError::DeserializeError(_) => break Err(err),
```
🤖 Fix all issues with AI agents
In @.github/workflows/ci-test-integration.yml:
- Around line 74-79: Add an inline comment next to the "Install Photon indexer"
step that explains why the specific git revision
ac7df6c388db847b7693a7a1cb766a7c9d7809b5 was pinned (e.g., date pinned and/or
the specific bugfix/feature or compatibility reason), so update the CI step
around the run: cargo install ... line to include that brief justification.
- Around line 74-91: The three steps ("Install Photon indexer", "Setup Node.js",
"Install zk-compression CLI") repeat the same long if condition; extract it into
a reusable expression (e.g., a workflow/job-level boolean or use contains()) and
reference that instead of repeating the OR chain. Update each step to use the
new expression (or contains('table_mania,committor,chainlink',
matrix.batch_tests)) and ensure they still run only when matrix.batch_tests
matches one of the three values.
In `@compressed-delegation-client/src/lib.rs`:
- Around line 1-2: Remove the crate-level suppression attributes at the top of
lib.rs and instead apply them narrowly to the generated code module: move or add
#[allow(deprecated)] and #[allow(unused_imports)] onto the module declaration
for generated_original (e.g., the mod generated_original { ... } or its mod
block/file), leaving the rest of the crate without those blanket allows so
handwritten code will still get warnings.
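The scoping the comment asks for can be sketched as below. This is a minimal, hypothetical layout (the module contents are stand-ins, not the real generated code): the `#[allow(...)]` attributes sit on the `generated_original` module declaration instead of at crate level, so handwritten code elsewhere keeps its warnings.

```rust
// Sketch: lint suppression scoped to the generated module only.
#[allow(deprecated, unused_imports)]
mod generated_original {
    use std::collections::HashMap; // unused import: tolerated only in here

    #[deprecated(note = "stand-in for a generated deprecated item")]
    pub fn legacy() -> u32 {
        42
    }

    pub fn current() -> u32 {
        legacy() // deprecated call: tolerated only in here
    }
}

fn main() {
    // Handwritten code outside the module is linted normally again.
    println!("{}", generated_original::current());
}
```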
In `@config.example.toml`:
- Line 5: Remove the trailing whitespace at the end of the comment line
containing "ALL FIELDS ARE OPTIONAL." in config.example.toml (the line that
currently reads "# ALL FIELDS ARE OPTIONAL. If removed, the "Default" value
listed in the "); trim the extra space after "OPTIONAL." so the line ends
cleanly with the period.
In `@docs/compression-documentation.md`:
- Around line 337-340: The "Benefits" section currently claims "- **Storage
Efficiency**: ~10x reduction in on-chain storage costs"; either add a citation
to support the exact figure (e.g., a footnote linking Light Protocol docs or a
benchmark) or soften the wording to a non-specific claim like "significant
reduction in on-chain storage costs"; update the bullet under the "Benefits"
heading accordingly and, if adding a citation, include the source reference
inline or as a footnote referencing the Light Protocol documentation or relevant
benchmark.
- Around line 348-353: Create two GitHub issues to track the planned
enhancements: one titled "Replace ArgTask with BufferTask to support larger
accounts" and one titled "Support onboarding using user-created compressed
accounts (remove empty PDA requirement)"; in each issue include a short
description referencing the exact feature text from the doc (mention BufferTask
vs ArgTask, and the PDA/empty-PDA constraint for compressed accounts), add clear
acceptance criteria (e.g., BufferTask integrated and tested with large account
flows; compressed-account flow works without requiring empty PDA and is covered
by unit/integration tests), suggest labels (enhancement, priority/medium,
needs-design), and link back to this documentation section so reviewers can
triage and assign owners.
In `@magicblock-chainlink/src/remote_account_provider/mod.rs`:
- Around line 1042-1084: The retry counter logic using `remaining_retries -= 1`
in the fetch loop should use `saturating_sub(1)` to make the intention explicit
and avoid accidental underflow if `MAX_RETRIES` changes; update the decrement of
`remaining_retries` (the variable initialized from `MAX_RETRIES`) to
`remaining_retries = remaining_retries.saturating_sub(1)` and keep the existing
`if remaining_retries == 0` check and early return path in the same loop (within
the function that contains the call to `Self::fetch_from_photon`).
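The suggested decrement pattern can be sketched as follows; `fetch_from_photon`, `fetch_with_retries`, and the failure schedule are hypothetical stand-ins for the real provider code, kept synchronous for brevity.

```rust
const MAX_RETRIES: u32 = 3;

// Stand-in for the real fetch: pretend the first two attempts fail.
fn fetch_from_photon(attempt: u32) -> Result<String, String> {
    if attempt < 2 {
        Err(format!("transient error on attempt {attempt}"))
    } else {
        Ok("account data".to_string())
    }
}

fn fetch_with_retries() -> Result<String, String> {
    let mut remaining_retries = MAX_RETRIES;
    let mut attempt = 0;
    loop {
        match fetch_from_photon(attempt) {
            Ok(data) => return Ok(data),
            Err(err) => {
                // `saturating_sub` makes the "never underflow" intent explicit,
                // even if MAX_RETRIES is later changed to 0.
                remaining_retries = remaining_retries.saturating_sub(1);
                if remaining_retries == 0 {
                    return Err(err);
                }
                attempt += 1;
            }
        }
    }
}

fn main() {
    println!("{:?}", fetch_with_retries());
}
```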
In `@magicblock-chainlink/src/remote_account_provider/photon_client.rs`:
- Around line 147-159: Add Rust doc comments to the PhotonClient trait methods
get_account() and get_multiple_accounts() explaining that Account objects
returned for delegated compressed accounts will have lamports set to 0 (the
invariant set by account_from_compressed_account), and that actual lamports are
restored later during undelegation (see process_external_undelegate_compressed
and CompressedDelegationRecord in flexi-counter); ensure the docs state this
behavior clearly so callers know they must not rely on lamports being present
and that undelegation logic re-applies lamports.
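The requested doc comments could look roughly like this. Everything below is a simplified, hypothetical mirror of the trait shape — `Account`, the `[u8; 32]` addresses, and `StubClient` are stand-ins, not the real types:

```rust
#[derive(Debug, Clone, PartialEq)]
pub struct Account {
    pub lamports: u64,
    pub data: Vec<u8>,
}

pub trait PhotonClient {
    /// Fetches a single account by address.
    ///
    /// NOTE: for delegated compressed accounts the returned `Account` has
    /// `lamports == 0`; this is the invariant set by
    /// `account_from_compressed_account`. The actual lamports are restored
    /// later during undelegation (see `process_external_undelegate_compressed`
    /// and `CompressedDelegationRecord`), so callers must not rely on
    /// lamports being present here.
    fn get_account(&self, address: [u8; 32]) -> Option<Account>;

    /// Fetches several accounts at once; the same zero-lamports invariant
    /// applies to every delegated compressed account in the result.
    fn get_multiple_accounts(&self, addresses: &[[u8; 32]]) -> Vec<Option<Account>>;
}

// Stub implementation so the sketch is runnable.
pub struct StubClient;

impl PhotonClient for StubClient {
    fn get_account(&self, _address: [u8; 32]) -> Option<Account> {
        Some(Account { lamports: 0, data: vec![1, 2, 3] })
    }

    fn get_multiple_accounts(&self, addresses: &[[u8; 32]]) -> Vec<Option<Account>> {
        addresses.iter().map(|a| self.get_account(*a)).collect()
    }
}

fn main() {
    let account = StubClient.get_account([0u8; 32]).unwrap();
    println!("lamports = {}", account.lamports);
}
```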
In `@magicblock-committor-service/src/intent_executor/two_stage_executor.rs`:
- Around line 259-280: The closure commit_slot_fn currently swallows RPC errors
by calling .ok() on the get_transaction result; update the closure to inspect
the Result, log any Err (at debug or warn) with context including the sig and
that get_transaction failed, and then return None for the error case; to do so
capture a logger (e.g., clone a logger from self.inner or use tracing) alongside
rpc_client and sig in the Arc::new move || { ... } closure, match on the .await
result from rpc_client.get_transaction(...) and call the logger (e.g.,
logger.debug/warn!("failed to fetch transaction slot for sig {}: {:?}", sig,
err)) before returning Ok(slot) or None.
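The shape of that change can be sketched without the async plumbing. `get_transaction`, `log_warn`, and `commit_slot` here are hypothetical stand-ins; the point is the `match` that logs the error with the signature before returning `None`, instead of `.ok()` swallowing it:

```rust
// Stand-in for the RPC call: returns a slot or an error.
fn get_transaction(sig: &str) -> Result<u64, String> {
    if sig == "known" {
        Ok(1234) // slot
    } else {
        Err("node is behind".to_string())
    }
}

// Stand-in for the logger.
fn log_warn(msg: String) {
    eprintln!("WARN: {msg}");
}

fn commit_slot(sig: &str) -> Option<u64> {
    match get_transaction(sig) {
        Ok(slot) => Some(slot),
        Err(err) => {
            // Previously `.ok()` discarded this error silently.
            log_warn(format!(
                "failed to fetch transaction slot for sig {sig}: {err:?}"
            ));
            None
        }
    }
}

fn main() {
    println!("{:?}", commit_slot("known"));
    println!("{:?}", commit_slot("unknown"));
}
```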
In `@magicblock-committor-service/src/tasks/args_task.rs`:
- Around line 271-309: The account size arrays for
ArgsTaskType::CompressedCommit, ::CompressedFinalize, and ::CompressedUndelegate
are identical; refactor by extracting the repeated AccountSizeClass array into a
single constant or helper function (e.g., COMPRESSED_ACCOUNT_BUDGET or fn
compressed_account_budget() -> &[AccountSizeClass]) and have each branch call
total_size_budget with that shared value to remove duplication and keep changes
centralized.
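The deduplication could be sketched like this; `AccountSizeClass`, its values, and the variant payloads are hypothetical stand-ins for the real types:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum AccountSizeClass {
    Small,
    Medium,
    Large,
}

enum ArgsTaskType {
    CompressedCommit,
    CompressedFinalize,
    CompressedUndelegate,
}

// Single source of truth instead of three identical inline arrays.
fn compressed_account_budget() -> &'static [AccountSizeClass] {
    const BUDGET: [AccountSizeClass; 3] = [
        AccountSizeClass::Small,
        AccountSizeClass::Medium,
        AccountSizeClass::Large,
    ];
    &BUDGET
}

fn total_size_budget(task: &ArgsTaskType) -> &'static [AccountSizeClass] {
    match task {
        // All compressed variants reference the shared table.
        ArgsTaskType::CompressedCommit
        | ArgsTaskType::CompressedFinalize
        | ArgsTaskType::CompressedUndelegate => compressed_account_budget(),
    }
}

fn main() {
    println!("{:?}", total_size_budget(&ArgsTaskType::CompressedCommit));
}
```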
In `@magicblock-committor-service/src/tasks/mod.rs`:
- Around line 203-207: Replace the `.expect("invariant")` usages with proper
error handling: update the call sites in get_missing_chunks, from_delivered, and
ChunksIter to propagate errors (return Result) from
is_chunk_delivered/set_chunk_delivered instead of panicking, or—if the condition
truly is an internal invariant—replace `.expect("invariant")` with a documented
debug-time assertion (e.g., debug_assert!) and return a Result/Error on
production paths; adjust the signatures of get_missing_chunks, from_delivered,
and the iterator behavior to return Result<T, E> (or convert is_chunk_delivered
to a non-fallible API) so callers can handle errors rather than panicking.
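The Result-propagating option can be sketched with a minimal stand-in for `Chunks` (the real struct has more fields; the index-error type here is hypothetical):

```rust
use std::collections::HashSet;

#[derive(Debug, PartialEq)]
struct OutOfBounds(usize);

struct Chunks {
    delivered: Vec<bool>,
    count: usize,
}

impl Chunks {
    fn new(count: usize) -> Self {
        Self { delivered: vec![false; count], count }
    }

    fn is_chunk_delivered(&self, i: usize) -> Result<bool, OutOfBounds> {
        self.delivered.get(i).copied().ok_or(OutOfBounds(i))
    }

    fn set_chunk_delivered(&mut self, i: usize) -> Result<(), OutOfBounds> {
        match self.delivered.get_mut(i) {
            Some(slot) => {
                *slot = true;
                Ok(())
            }
            None => Err(OutOfBounds(i)),
        }
    }

    // Propagates the error with `?` instead of panicking via `.expect()`.
    fn get_missing_chunks(&self) -> Result<HashSet<usize>, OutOfBounds> {
        let mut missing = HashSet::new();
        for i in 0..self.count {
            if !self.is_chunk_delivered(i)? {
                missing.insert(i);
            }
        }
        Ok(missing)
    }
}

fn main() {
    let mut chunks = Chunks::new(3);
    chunks.set_chunk_delivered(1).unwrap();
    println!("{:?}", chunks.get_missing_chunks());
}
```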
In `@magicblock-core/src/compression/mod.rs`:
- Around line 14-26: In derive_cda_from_pda, the .expect() on
hashv_to_bn254_field_size_be_const_array needs an explicit SAFETY justification:
add a comment starting with "SAFETY:" immediately above the .expect() that
states the invariant that Pubkey::to_bytes() always yields a 32-byte array and
therefore the BN254 field-size hash cannot fail for this input, referencing
hashv_to_bn254_field_size_be_const_array and Pubkey::to_bytes() so future
readers know why the unwrap is considered infallible.
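The SAFETY-comment pattern being asked for looks roughly like this; `hash_32_bytes` is a hypothetical stand-in for `hashv_to_bn254_field_size_be_const_array` (which is not reproduced here), failing only on inputs that are not exactly 32 bytes:

```rust
// Stand-in hash: fallible only for non-32-byte input.
fn hash_32_bytes(input: &[u8]) -> Option<[u8; 32]> {
    let bytes: [u8; 32] = input.try_into().ok()?;
    Some(bytes)
}

fn derive_cda_from_pda(pda_bytes: [u8; 32]) -> [u8; 32] {
    // SAFETY: `Pubkey::to_bytes()` always yields a 32-byte array, so the
    // BN254 field-size hash cannot fail for this input.
    hash_32_bytes(&pda_bytes).expect("32-byte PDA seed always hashes")
}

fn main() {
    let cda = derive_cda_from_pda([7u8; 32]);
    println!("{}", cda.len());
}
```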
In `@programs/magicblock/src/utils/instruction_utils.rs`:
- Around line 368-379: PDAs in the non-undelegation branch are incorrectly
marked as signers; change the branch that currently does
AccountMeta::new_readonly(*pubkey, true) to AccountMeta::new_readonly(*pubkey,
false) so PDAs in the pdas loop are always non-signers except for the explicit
undelegation path (is_undelegation using
MagicBlockInstruction::ScheduleCommitAndUndelegate |
ScheduleCompressedCommitAndUndelegate) and continue pushing into account_metas
accordingly.
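The branch logic can be sketched with a simplified stand-in for `AccountMeta` (the real type comes from the Solana SDK; this local mirror exists only to make the sketch self-contained):

```rust
#[derive(Debug, Clone, PartialEq)]
struct AccountMeta {
    pubkey: [u8; 32],
    is_signer: bool,
    is_writable: bool,
}

impl AccountMeta {
    fn new_readonly(pubkey: [u8; 32], is_signer: bool) -> Self {
        Self { pubkey, is_signer, is_writable: false }
    }
}

fn pda_metas(pdas: &[[u8; 32]], is_undelegation: bool) -> Vec<AccountMeta> {
    let mut account_metas = Vec::new();
    for pubkey in pdas {
        if is_undelegation {
            // Only the explicit undelegation path marks PDAs as signers.
            account_metas.push(AccountMeta::new_readonly(*pubkey, true));
        } else {
            // Fix: this branch previously passed `true` as well; PDAs cannot
            // actually sign, so they are non-signers on the normal path.
            account_metas.push(AccountMeta::new_readonly(*pubkey, false));
        }
    }
    account_metas
}

fn main() {
    let pdas = [[1u8; 32], [2u8; 32]];
    println!("{:?}", pda_metas(&pdas, false));
}
```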
In `@test-integration/test-committor-service/tests/utils/transactions.rs`:
- Around line 77-122: The shorthand macro arm has a parameter/name mismatch: it
declares ($rpc_client:ident, $pubkey:expr, $label:literal) but then calls
get_compressed_account! using $rpc_client; change the shorthand variant to use
the same names as the main arm (e.g., declare ($photon_client:ident,
$address:expr, $label:literal) and call get_compressed_account!($photon_client,
$address, $label, |_: &light_client::indexer::CompressedAccount, _: u8| true))
so the identifiers match and the macro compiles.
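The identifier-mismatch fix can be illustrated with a toy stand-in for `get_compressed_account!` (the real macro does an indexer fetch; this version is deliberately simplified to keep the sketch runnable):

```rust
macro_rules! get_compressed_account {
    // Main arm: client, address, label, and a filter predicate.
    ($photon_client:ident, $address:expr, $label:literal, $pred:expr) => {{
        let pred: fn(u64) -> bool = $pred;
        if pred($photon_client) {
            Some(($address, $label))
        } else {
            None
        }
    }};
    // Shorthand arm: previously it declared `$rpc_client` but forwarded
    // `$photon_client`, which cannot compile; the names now match.
    ($photon_client:ident, $address:expr, $label:literal) => {
        get_compressed_account!($photon_client, $address, $label, |_| true)
    };
}

fn lookup() -> Option<(u32, &'static str)> {
    let photon_client: u64 = 1;
    get_compressed_account!(photon_client, 42u32, "counter")
}

fn main() {
    println!("{:?}", lookup());
}
```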
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@magicblock-committor-service/src/tasks/args_task.rs`:
- Around line 353-366: The set_compressed_data method silently ignores calls for
non-compressed ArgsTaskType variants; update set_compressed_data to assert or
log when invoked on a non-compressed variant (e.g., use debug_assert!(false,
...) or process_logger.warn) so misuse is caught during development;
specifically modify fn set_compressed_data(&mut self, compressed_data:
CompressedData) to handle the wildcard arm by emitting a debug assertion or
warning referencing ArgsTaskType::CompressedCommit,
ArgsTaskType::CompressedFinalize, and ArgsTaskType::CompressedUndelegate (and
include the current self.task_type in the log/assert message for context).
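The misuse guard can be sketched as below; the enum, its payloads, and the `String` compressed data are simplified stand-ins for the real types. The key part is the wildcard arm that fires a `debug_assert!` (a warning log would be the release-build equivalent) instead of silently ignoring the call:

```rust
#[derive(Debug)]
enum ArgsTaskType {
    Commit,
    CompressedCommit(Option<String>),
    CompressedFinalize(Option<String>),
    CompressedUndelegate(Option<String>),
}

impl ArgsTaskType {
    fn set_compressed_data(&mut self, compressed_data: String) {
        match self {
            ArgsTaskType::CompressedCommit(slot)
            | ArgsTaskType::CompressedFinalize(slot)
            | ArgsTaskType::CompressedUndelegate(slot) => {
                *slot = Some(compressed_data);
            }
            other => {
                // Caught during development in debug builds, including the
                // offending variant in the message for context.
                debug_assert!(
                    false,
                    "set_compressed_data called on non-compressed task {other:?}"
                );
            }
        }
    }
}

fn main() {
    let mut task = ArgsTaskType::CompressedCommit(None);
    task.set_compressed_data("proof-bytes".to_string());
    println!("{task:?}");
}
```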
- Around line 155-165: The CompressedUndelegate branch builds an instruction but
omits the required .args(...) call; update the
ArgsTaskType::CompressedUndelegate chain (where
UndelegateBuilder::new().payer(...).delegated_account(...).owner_program(...).system_program(...).add_remaining_accounts(...).instruction())
to insert .args(UndelegateArgs { validity_proof: value.compressed_data.proof,
delegation_record_account_meta: value.compressed_data.account_meta,
compressed_delegated_record:
value.compressed_data.compressed_delegation_record_bytes }) before calling
.instruction() so the UndelegateBuilder receives the needed metadata.
In `@test-integration/test-committor-service/tests/utils/transactions.rs`:
- Around line 307-309: Several test utilities use a hardcoded RPC URL string
instead of the RPC_URL constant; update all RpcClient/PhotonIndexer
instantiations to use RPC_URL (e.g., replace
RpcClient::new("http://localhost:7799".to_string()) and any
PhotonIndexer::new("http://localhost:7799", ...) with
RpcClient::new(RPC_URL.to_string()) and PhotonIndexer::new(RPC_URL.to_string(),
...) as appropriate) in the functions init_and_delegate_account_on_chain,
init_and_delegate_order_book_on_chain,
fund_validator_auth_and_ensure_validator_fees_vault so they match
init_and_delegate_compressed_account_on_chain and ensure consistent use of the
RPC_URL constant.
♻️ Duplicate comments (2)
test-integration/test-tools/src/validator.rs (1)
188-194: Hardcoded prover port may cause test flakiness in parallel runs.

The prover port is hardcoded to 3001, while other ports (`rpc_port`, `metrics_port`) use `resolve_port()` for dynamic allocation. With tests running in parallel, this risks port conflicts.

Consider using `resolve_port()` for `prover_port` to match the pattern used elsewhere in the test infrastructure.

magicblock-committor-service/src/tasks/args_task.rs (1)
271-279: Good refactoring: duplication addressed with helper function.

The previous review flagged duplicate budget structures. This is now resolved by the `compressed_task_accounts_size_budget()` helper at lines 405-417. Consider further consolidating with pattern matching:

```rust
ArgsTaskType::CompressedCommit(_)
| ArgsTaskType::CompressedFinalize(_)
| ArgsTaskType::CompressedUndelegate(_) => compressed_task_accounts_size_budget(),
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (6)
- config.example.toml
- magicblock-committor-service/src/intent_executor/two_stage_executor.rs
- magicblock-committor-service/src/tasks/args_task.rs
- magicblock-core/src/compression/mod.rs
- test-integration/test-committor-service/tests/utils/transactions.rs
- test-integration/test-tools/src/validator.rs
🧰 Additional context used
📓 Path-based instructions (2)
{magicblock-*,programs,storage-proto}/**
⚙️ CodeRabbit configuration file
{magicblock-*,programs,storage-proto}/**: Treat any usage of `.unwrap()` or `.expect()` in production Rust code as a MAJOR issue.
These should not be categorized as trivial or nit-level concerns.
Request proper error handling or explicit justification with invariants.
Files:
- magicblock-core/src/compression/mod.rs
- magicblock-committor-service/src/intent_executor/two_stage_executor.rs
- magicblock-committor-service/src/tasks/args_task.rs
{test-*,tools}/**
⚙️ CodeRabbit configuration file
Usage of `.unwrap()` or `.expect()` in test code is acceptable and may be treated as trivial.
Files:
- test-integration/test-tools/src/validator.rs
- test-integration/test-committor-service/tests/utils/transactions.rs
🧠 Learnings (17)
📓 Common learnings
Learnt from: Dodecahedr0x
Repo: magicblock-labs/magicblock-validator PR: 639
File: test-integration/test-chainlink/tests/ix_01_ensure-accounts.rs:85-114
Timestamp: 2025-11-19T11:31:24.218Z
Learning: In the magicblock-validator codebase, compressed delegation records are never subscribed to by design. Tests for compressed delegation flows should focus on verifying that the delegated account (e.g., counter PDA) is cloned as delegated and not subscribed, but do not need to check for subscription to delegation-record-like PDAs in the compressed path because subscriptions to compressed delegation records never occur.
Learnt from: Dodecahedr0x
Repo: magicblock-labs/magicblock-validator PR: 639
File: magicblock-chainlink/tests/04_redeleg_other_separate_slots.rs:158-165
Timestamp: 2025-11-18T08:47:39.702Z
Learning: In magicblock-chainlink tests involving compressed accounts, `set_remote_slot()` sets the slot of the `AccountSharedData`, while `compressed_account_shared_with_owner_and_slot()` sets the slot of the delegation record. These are two different fields and both calls are necessary.
Learnt from: Dodecahedr0x
Repo: magicblock-labs/magicblock-validator PR: 639
File: test-integration/test-committor-service/tests/test_ix_commit_local.rs:867-881
Timestamp: 2025-12-03T09:33:48.707Z
Learning: Repo: magicblock-labs/magicblock-validator PR: 639
Context: test-integration/test-committor-service/tests/test_ix_commit_local.rs (ix_commit_local)
Learning: The PhotonIndexer used for compressed account fetches (get_compressed_account) has built‑in retry logic (defaults to ~10 attempts), so tests should not add separate retry loops around compressed fetches unless there’s a specific need.
Learnt from: Dodecahedr0x
Repo: magicblock-labs/magicblock-validator PR: 639
File: magicblock-committor-service/src/transaction_preparator/delivery_preparator.rs:172-193
Timestamp: 2025-11-26T13:27:46.702Z
Learning: The Photon indexer (PhotonIndexer from light-client) handles slot validation internally when an IndexerRpcConfig with a slot constraint is provided. Additional slot validation at the call site is not needed when fetching compressed data via get_compressed_data.
📚 Learning: 2025-12-03T09:33:48.707Z
Learnt from: Dodecahedr0x
Repo: magicblock-labs/magicblock-validator PR: 639
File: test-integration/test-committor-service/tests/test_ix_commit_local.rs:867-881
Timestamp: 2025-12-03T09:33:48.707Z
Learning: Repo: magicblock-labs/magicblock-validator PR: 639
Context: test-integration/test-committor-service/tests/test_ix_commit_local.rs (ix_commit_local)
Learning: The PhotonIndexer used for compressed account fetches (get_compressed_account) has built‑in retry logic (defaults to ~10 attempts), so tests should not add separate retry loops around compressed fetches unless there’s a specific need.
Applied to files:
- magicblock-core/src/compression/mod.rs
- magicblock-committor-service/src/intent_executor/two_stage_executor.rs
- test-integration/test-tools/src/validator.rs
- test-integration/test-committor-service/tests/utils/transactions.rs
- magicblock-committor-service/src/tasks/args_task.rs
📚 Learning: 2025-12-03T09:36:01.527Z
Learnt from: Dodecahedr0x
Repo: magicblock-labs/magicblock-validator PR: 639
File: magicblock-chainlink/src/remote_account_provider/mod.rs:1350-1353
Timestamp: 2025-12-03T09:36:01.527Z
Learning: Repo: magicblock-labs/magicblock-validator
File: magicblock-chainlink/src/remote_account_provider/mod.rs
Context: consolidate_fetched_remote_accounts
Learning: For unexpected result counts (>2), the project prefers logging an error and returning an empty Vec over panicking; acceptable during development per maintainer (Dodecahedr0x).
Applied to files:
- magicblock-core/src/compression/mod.rs
- magicblock-committor-service/src/intent_executor/two_stage_executor.rs
- test-integration/test-tools/src/validator.rs
- test-integration/test-committor-service/tests/utils/transactions.rs
- magicblock-committor-service/src/tasks/args_task.rs
📚 Learning: 2025-11-19T11:31:24.218Z
Learnt from: Dodecahedr0x
Repo: magicblock-labs/magicblock-validator PR: 639
File: test-integration/test-chainlink/tests/ix_01_ensure-accounts.rs:85-114
Timestamp: 2025-11-19T11:31:24.218Z
Learning: In the magicblock-validator codebase, compressed delegation records are never subscribed to by design. Tests for compressed delegation flows should focus on verifying that the delegated account (e.g., counter PDA) is cloned as delegated and not subscribed, but do not need to check for subscription to delegation-record-like PDAs in the compressed path because subscriptions to compressed delegation records never occur.
Applied to files:
- magicblock-core/src/compression/mod.rs
- test-integration/test-committor-service/tests/utils/transactions.rs
- magicblock-committor-service/src/tasks/args_task.rs
📚 Learning: 2025-11-21T10:22:07.520Z
Learnt from: taco-paco
Repo: magicblock-labs/magicblock-validator PR: 661
File: magicblock-committor-service/src/intent_executor/single_stage_executor.rs:20-28
Timestamp: 2025-11-21T10:22:07.520Z
Learning: In magicblock-committor-service's SingleStageExecutor and TwoStageExecutor (single_stage_executor.rs and two_stage_executor.rs), the fields transaction_strategy, junk, and patched_errors are intentionally public because these executors are designed to be used independently outside of the IntentExecutor scope, and callers need access to these execution reports for cleanup and error handling.
Applied to files:
magicblock-committor-service/src/intent_executor/two_stage_executor.rs
📚 Learning: 2025-10-21T14:00:54.642Z
Learnt from: bmuddha
Repo: magicblock-labs/magicblock-validator PR: 578
File: magicblock-aperture/src/requests/websocket/account_subscribe.rs:18-27
Timestamp: 2025-10-21T14:00:54.642Z
Learning: In magicblock-aperture account_subscribe handler (src/requests/websocket/account_subscribe.rs), the RpcAccountInfoConfig fields data_slice, commitment, and min_context_slot are currently ignored—only encoding is applied. This is tracked as technical debt in issue `#579`: https://github.com/magicblock-labs/magicblock-validator/issues/579
Applied to files:
- magicblock-committor-service/src/intent_executor/two_stage_executor.rs
- test-integration/test-committor-service/tests/utils/transactions.rs
- magicblock-committor-service/src/tasks/args_task.rs
📚 Learning: 2025-11-07T13:09:52.253Z
Learnt from: bmuddha
Repo: magicblock-labs/magicblock-validator PR: 589
File: test-kit/src/lib.rs:275-0
Timestamp: 2025-11-07T13:09:52.253Z
Learning: In test-kit, the transaction scheduler in ExecutionTestEnv is not expected to shut down during tests. Therefore, using `.unwrap()` in test helper methods like `schedule_transaction` is acceptable and will not cause issues in the test environment.
Applied to files:
magicblock-committor-service/src/intent_executor/two_stage_executor.rs
📚 Learning: 2025-11-19T09:34:37.917Z
Learnt from: thlorenz
Repo: magicblock-labs/magicblock-validator PR: 621
File: test-integration/test-chainlink/tests/ix_remote_account_provider.rs:62-63
Timestamp: 2025-11-19T09:34:37.917Z
Learning: In test-integration/test-chainlink/tests/ix_remote_account_provider.rs and similar test files, the `_fwd_rx` receiver returned by `init_remote_account_provider()` is intentionally kept alive (but unused) to prevent "receiver dropped" errors on the sender side. The pattern `let (remote_account_provider, _fwd_rx) = init_remote_account_provider().await;` should NOT be changed to `let (remote_account_provider, _) = ...` because dropping the receiver would cause send() operations to fail.
Applied to files:
- magicblock-committor-service/src/intent_executor/two_stage_executor.rs
- test-integration/test-committor-service/tests/utils/transactions.rs
📚 Learning: 2025-11-07T14:20:31.457Z
Learnt from: thlorenz
Repo: magicblock-labs/magicblock-validator PR: 621
File: magicblock-chainlink/src/remote_account_provider/chain_pubsub_actor.rs:457-495
Timestamp: 2025-11-07T14:20:31.457Z
Learning: In magicblock-chainlink/src/remote_account_provider/chain_pubsub_client.rs, the unsubscribe closure returned by PubSubConnection::account_subscribe(...) resolves to () (unit), not a Result. Downstream code should not attempt to inspect an unsubscribe result and can optionally wrap it in a timeout to guard against hangs.
Applied to files:
- magicblock-committor-service/src/intent_executor/two_stage_executor.rs
- test-integration/test-committor-service/tests/utils/transactions.rs
📚 Learning: 2025-11-19T12:55:48.931Z
Learnt from: Dodecahedr0x
Repo: magicblock-labs/magicblock-validator PR: 639
File: test-integration/test-tools/src/validator.rs:193-197
Timestamp: 2025-11-19T12:55:48.931Z
Learning: In the magicblock-validator codebase, when constructing arguments for the light CLI test validator in start_light_validator_with_config (test-integration/test-tools/src/validator.rs), the shlex::split approach is necessary because the light CLI does not handle whitespaces properly in validator arguments. The string concatenation + shlex parsing pattern should not be refactored to direct argument construction in this specific case.
Applied to files:
test-integration/test-tools/src/validator.rs
📚 Learning: 2025-11-18T08:47:39.702Z
Learnt from: Dodecahedr0x
Repo: magicblock-labs/magicblock-validator PR: 639
File: magicblock-chainlink/tests/04_redeleg_other_separate_slots.rs:158-165
Timestamp: 2025-11-18T08:47:39.702Z
Learning: In magicblock-chainlink tests involving compressed accounts, `set_remote_slot()` sets the slot of the `AccountSharedData`, while `compressed_account_shared_with_owner_and_slot()` sets the slot of the delegation record. These are two different fields and both calls are necessary.
Applied to files:
test-integration/test-committor-service/tests/utils/transactions.rs
📚 Learning: 2025-11-26T13:27:46.702Z
Learnt from: Dodecahedr0x
Repo: magicblock-labs/magicblock-validator PR: 639
File: magicblock-committor-service/src/transaction_preparator/delivery_preparator.rs:172-193
Timestamp: 2025-11-26T13:27:46.702Z
Learning: The Photon indexer (PhotonIndexer from light-client) handles slot validation internally when an IndexerRpcConfig with a slot constraint is provided. Additional slot validation at the call site is not needed when fetching compressed data via get_compressed_data.
Applied to files:
test-integration/test-committor-service/tests/utils/transactions.rs
📚 Learning: 2025-12-01T16:02:05.367Z
Learnt from: thlorenz
Repo: magicblock-labs/magicblock-validator PR: 703
File: magicblock-chainlink/src/submux/mod.rs:652-654
Timestamp: 2025-12-01T16:02:05.367Z
Learning: In magicblock-chainlink/src/submux/mod.rs, the subscribe_program method intentionally adds program_id to program_subs before attempting the subscription. This ensures that even if the initial subscription fails or only partially succeeds across clients, the reconnection logic will retry the subscription. This is a deliberate design pattern for resilience in the multi-client architecture and should not be "fixed" to remove entries on failure.
Applied to files:
test-integration/test-committor-service/tests/utils/transactions.rs
📚 Learning: 2025-10-14T09:56:14.047Z
Learnt from: taco-paco
Repo: magicblock-labs/magicblock-validator PR: 564
File: test-integration/programs/flexi-counter/src/processor/call_handler.rs:122-125
Timestamp: 2025-10-14T09:56:14.047Z
Learning: The file test-integration/programs/flexi-counter/src/processor/call_handler.rs contains a test smart contract used for integration testing, not production code.
Applied to files:
test-integration/test-committor-service/tests/utils/transactions.rs
📚 Learning: 2025-11-07T13:20:13.793Z
Learnt from: bmuddha
Repo: magicblock-labs/magicblock-validator PR: 589
File: magicblock-processor/src/scheduler/coordinator.rs:227-238
Timestamp: 2025-11-07T13:20:13.793Z
Learning: In magicblock-processor's ExecutionCoordinator (scheduler/coordinator.rs), the `account_contention` HashMap intentionally does not call `shrink_to_fit()`. Maintaining slack capacity is beneficial for performance by avoiding frequent reallocations during high transaction throughput. As long as empty entries are removed from the map (which `clear_account_contention` does), the capacity overhead is acceptable.
Applied to files:
magicblock-committor-service/src/tasks/args_task.rs
📚 Learning: 2025-11-25T11:07:20.001Z
Learnt from: Dodecahedr0x
Repo: magicblock-labs/magicblock-validator PR: 639
File: test-integration/programs/flexi-counter/src/processor.rs:643-680
Timestamp: 2025-11-25T11:07:20.001Z
Learning: In the compressed delegation architecture, ownership reassignment during undelegation is handled by the delegation program itself, not by the owner program's external undelegate handler. The external undelegate handler (e.g., `process_external_undelegate_compressed` in flexi-counter) is a callback for program-specific cleanup (restoring lamports and data) after the delegation program has already reassigned ownership back to the original owner stored in the CompressedDelegationRecord.
Applied to files:
magicblock-committor-service/src/tasks/args_task.rs
📚 Learning: 2025-10-26T08:49:31.543Z
Learnt from: taco-paco
Repo: magicblock-labs/magicblock-validator PR: 585
File: magicblock-committor-service/src/tasks/buffer_task.rs:111-115
Timestamp: 2025-10-26T08:49:31.543Z
Learning: In the magicblock-committor-service, compute units returned by the `compute_units()` method in task implementations (such as `BufferTask`, `ArgsTask`, etc.) represent the compute budget for a single task. Transactions can comprise multiple tasks, and the total compute budget for a transaction is computed as the sum of the compute units of all tasks included in that transaction.
Applied to files:
magicblock-committor-service/src/tasks/args_task.rs
🧬 Code graph analysis (2)
magicblock-core/src/compression/mod.rs (1)
test-integration/programs/flexi-counter/src/state.rs (1)
pda(45-48)
magicblock-committor-service/src/tasks/args_task.rs (1)
magicblock-committor-service/src/tasks/mod.rs (6)
preparation_state(80-80) task_type(99-99) is_compressed(105-105) set_compressed_data(108-108) get_compressed_data(111-111) delegated_account(114-114)
🔇 Additional comments (23)
config.example.toml (2)
61-65: LGTM! The indentation adjustment in the `remotes` array improves visual consistency.
256-268: LGTM! The new `[compression]` section is well-documented and follows the established patterns in this configuration file:
- Clear section header with banner comments
- Both `photon-url` and `api-key` are appropriately commented out by default
- Environment variable naming follows the `MBV_COMPRESSION__*` convention
- Default values and descriptions are consistent with PR objectives
This aligns with the PR's goal of adding configuration for compressed account delegation.
test-integration/test-tools/src/validator.rs (3)
86-86: LGTM! Clean refactor extracting the account list into a reusable helper, maintaining parity with the original inline definition.
123-216: Function structure looks good. The function correctly:
- Filters out args that light test-validator sets internally.
- Reuses the `devnet_accounts` helper for consistency.
- Uses `shlex` for arg wrapping (necessary due to light CLI whitespace handling, as per learnings).
- Waits for the prover port since it's the last component to start.
389-424: LGTM! Clean helper that centralizes the devnet account definitions. The fixed-size array return type is appropriate, and the separation between dynamically-resolved accounts (from `loaded_accounts`) and static test fixtures is clear.
magicblock-core/src/compression/mod.rs (3)
1-3: LGTM! Imports are appropriate and all are used within the module.
14-27: LGTM - SAFETY justification addresses prior feedback. The `// SAFETY:` comment on line 17 documents the invariant that a 32-byte PDA seed guarantees successful BN254 hashing, satisfying the coding guidelines requirement for explicit justification of `.expect()` in production code.
35-43: LGTM! The test provides deterministic verification of the CDA derivation logic with known input/output pairs, ensuring regressions are caught if the derivation changes.
magicblock-committor-service/src/intent_executor/two_stage_executor.rs (2)
221-244: LGTM! Compressed flag properly propagated in unfinalized account recovery. The `compressed` field is correctly derived from `task.is_compressed()` for both the execution strategy and the cleanup strategy, ensuring compressed account handling is consistent through the recovery path.
259-283: LGTM! Well-structured commit slot fetching with proper error logging. The closure correctly:
- Captures and clones the RPC client to avoid lifetime issues
- Logs errors via `.inspect_err()` before converting to `Option`
- Uses appropriate commitment level for transaction fetching
- Has no unsafe error suppression patterns

The `max_supported_transaction_version: Some(0)` setting is consistent with the rest of the committor service codebase and appears intentional.
test-integration/test-committor-service/tests/utils/transactions.rs (6)
get_accountpattern for compressed accounts. The past issue with parameter name mismatch has been fixed—$photon_clientis now used consistently in both variants.Based on learnings, the PhotonIndexer already has built-in retry logic (~10 attempts), so the additional retries here are somewhat redundant, but acceptable for extra reliability in test code.
224-224: Good typo fix.Correcting
rent_excempttorent_exemptimproves code clarity.
346-349: Account fetched before delegation—verify this is intentional.
pda_accis retrieved at line 348 before the delegation transaction (lines 350-376), meaning it won't reflect the delegated state. In contrast,init_and_delegate_account_on_chainfetches the account after delegation (line 297).If callers expect
pda_accto reflect post-delegation state (e.g., for assertions), this ordering may cause test failures or incorrect assumptions.
351-359: Compute budget and delegation transaction setup looks correct.The compute budget of 250,000 CU is set explicitly for the compressed delegation instruction, and the transaction is properly constructed with the payer and signer.
370-376: Good error logging pattern.Using
.inspect_err()before.expect()provides useful debug information (including the transaction signature) when delegation fails, which aids test debugging.
447-449: LGTM!The
#[allow(dead_code)]annotation is appropriate for test utility functions that may not be used in all test configurations.magicblock-committor-service/src/tasks/args_task.rs (7)
31-40: LGTM! Enum extension follows existing patterns.The new compressed variants (
CompressedCommit,CompressedFinalize,CompressedUndelegate) are well-organized alongside their non-compressed counterparts.
56-68: LGTM! Preparation state logic correctly categorizes compressed tasks.Compressed variants appropriately require
PreparationTask::Compressedwhile non-compressed variants need no preparation.
221-227: LGTM! Compressed variants correctly excluded from buffer optimization.Compressed tasks appropriately remain as Args tasks and are not convertible to BufferTask.
253-255: LGTM! Higher compute budget for compressed tasks is appropriate.The 250,000 CU allocation (vs 70,000 for non-compressed) accounts for the additional proof verification complexity in compressed delegation operations.
383-402: LGTM!delegated_account()comprehensively covers all variants.Returns the appropriate pubkey for each task type, correctly returning
Noneonly forBaseActionwhich doesn't operate on a specific delegated account.
405-417: LGTM! Well-documented helper function.Clear extraction of the shared budget calculation with helpful inline comments explaining each account's purpose.
427-431: LGTM! Metric labels follow consistent naming convention.The
args_compressed_*labels align with existing patterns and will enable proper observability for compressed delegation operations.
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
f27a8d8 to fa5e1b2 Compare
Actionable comments posted: 13
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (7)
test-integration/test-committor-service/tests/test_ix_commit_local.rs (1)
368-379: Incomplete handling of compressed modes in `commit_book_order_account`. The function accepts `CommitAccountMode` but only creates non-compressed `MagicBaseIntent` variants. If `CompressedCommit` is passed, it creates `MagicBaseIntent::Commit` (not `CompressedCommit`). Similarly, `CompressedCommitAndUndelegate` creates `MagicBaseIntent::CommitAndUndelegate` (not `CompressedCommitAndUndelegate`). Currently no tests call this function with compressed modes, so this may be intentional. If compressed order book commits are out of scope, consider restricting the `mode` parameter or adding a guard.
💡 Suggested guard to prevent misuse
async fn commit_book_order_account(
    changed_len: usize,
    expected_strategy: CommitStrategy,
    mode: CommitAccountMode,
) {
+   assert!(
+       !matches!(
+           mode,
+           CommitAccountMode::CompressedCommit
+               | CommitAccountMode::CompressedCommitAndUndelegate
+       ),
+       "Compressed modes are not supported for order book accounts"
+   );
    init_logger!();
magicblock-config/src/tests.rs (1)
559-568: Minor inconsistency: mixed qualified and unqualified `Duration` usage. Line 561 uses `std::time::Duration` while line 568 uses unqualified `Duration`. Both work since `Duration` is imported at line 1, but consider using consistent style throughout.
♻️ Suggested fix for consistency
    assert_eq!(
        config.chainlink.resubscription_delay,
-       std::time::Duration::from_millis(150)
+       Duration::from_millis(150)
    );
magicblock-chainlink/src/remote_account_provider/chain_updates_client.rs (1)
88-97: LGTM! The `Compression` endpoint handling correctly returns `InvalidPubsubEndpoint`, mirroring the `Rpc` endpoint behavior. Both are appropriately rejected as invalid for pubsub client construction since they serve different purposes (Photon indexer queries vs. real-time subscriptions). Consider consolidating the duplicate arms if more non-pubsub endpoint variants are added in the future:
♻️ Optional: Consolidate non-pubsub endpoint handling
-   Rpc { .. } => {
-       Err(RemoteAccountProviderError::InvalidPubsubEndpoint(format!(
-           "{endpoint:?}"
-       )))
-   }
-   Compression { .. } => {
-       Err(RemoteAccountProviderError::InvalidPubsubEndpoint(format!(
-           "{endpoint:?}"
-       )))
-   }
+   Rpc { .. } | Compression { .. } => {
+       Err(RemoteAccountProviderError::InvalidPubsubEndpoint(format!(
+           "{endpoint:?}"
+       )))
+   }
test-integration/test-runner/bin/run_tests.rs (2)
244-301: Inconsistent cleanup for Light validator in table_mania_and_committor_tests. The validator is started with `ValidatorCluster::Light` (line 246), but cleanup uses `cleanup_devnet_only` (lines 275, 292, 301). For Light validators, `cleanup_light_validator` should be used instead to ensure the `light test-validator --stop` command is executed to properly stop the light validator process.
🐛 Proposed fix
match run_test(test_table_mania_dir, Default::default()) {
    Ok(output) => output,
    Err(err) => {
        eprintln!("Failed to run table-mania: {:?}", err);
-       cleanup_devnet_only(&mut devnet_validator);
+       cleanup_light_validator(&mut devnet_validator, "light");
        return Err(err.into());
    }
}
match run_test(test_committor_dir, Default::default()) {
    Ok(output) => output,
    Err(err) => {
        eprintln!("Failed to run committor: {:?}", err);
-       cleanup_devnet_only(&mut devnet_validator);
+       cleanup_light_validator(&mut devnet_validator, "light");
        return Err(err.into());
    }
}
-   cleanup_devnet_only(&mut devnet_validator);
+   cleanup_light_validator(&mut devnet_validator, "light");
Ok((table_mania_test_output, committor_test_output))
305-312: Light validator passed in wrong position to wait_for_ctrlc. The `devnet_validator` is started with `ValidatorCluster::Light` but is passed as the first argument (devnet position) to `wait_for_ctrlc`. According to the signal.rs signature, light validators should be the third argument to receive proper cleanup via `cleanup_light_validator`.
🐛 Proposed fix
} else {
    let setup_needed = config.setup_devnet(TABLE_MANIA_TEST)
        || config.setup_devnet(COMMITTOR_TEST);
-   let devnet_validator = setup_needed.then(start_devnet_validator);
+   let light_validator = setup_needed.then(start_devnet_validator);
    Ok((
-       wait_for_ctrlc(devnet_validator, None, None, success_output())?,
+       wait_for_ctrlc(None, None, light_validator, success_output())?,
        success_output(),
    ))
}
magicblock-committor-service/src/transaction_preparator/mod.rs (1)
114-121: Replace `.expect()` with proper error handling; dummy lookup table check does not guarantee real table assembly will succeed. The early check (lines 91-99) validates assembly using synthetic dummy lookup tables created from pubkey chunks. However, the actual call (lines 114-121) uses real lookup tables retrieved from the async `prepare_for_delivery()` operation. These tables differ fundamentally in structure and state, making the dummy check insufficient to guarantee the real call will succeed. The invariant claimed by the `.expect()` comment does not hold. Per coding guidelines, production code in `magicblock-*` must not use `.expect()` without explicit justification with proven invariants. Use proper error propagation here instead.
556-556: Mutex `.unwrap()` on lock in spawned task. The `fetching_accounts.lock().unwrap()` usage is standard Rust practice for mutex poisoning. While the coding guidelines flag `.unwrap()` as a major issue, mutex lock unwrapping is an accepted pattern since a poisoned mutex indicates a panic occurred while holding the lock, which is typically unrecoverable. However, for consistency with the guideline, you could use `.expect("fetching_accounts lock poisoned")` to provide context, similar to the pattern used in `CacheTaskInfoFetcher` with `MUTEX_POISONED_MSG`.
🤖 Fix all issues with AI agents
In `@compressed-delegation-client/src/utils.rs`:
- Around line 8-15: Add a short inline comment in the DataHasher impl for
CompressedDelegationRecord inside the hash(&self) method explaining why hash[0]
= 0 is applied (to force the digest < BN254 scalar modulus per Light Protocol
convention), so future maintainers understand the field-size constraint; place
this comment immediately above or beside the hash[0] = 0 statement within the
hash function.
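The field-size constraint that the requested comment should document can be sketched in isolation. This is an illustrative stand-in, not the crate's API: `truncate_to_bn254_field` is a hypothetical name, and the real implementation applies this step to a SHA-256 digest of the record.

```rust
// Hypothetical sketch of the convention being documented: Light Protocol
// account hashes must be strictly smaller than the BN254 scalar field
// modulus (~254 bits). Zeroing the most significant byte of a 32-byte
// digest caps the value at 248 bits, which is always below the modulus.
fn truncate_to_bn254_field(mut digest: [u8; 32]) -> [u8; 32] {
    // Force digest < BN254 scalar modulus per Light Protocol convention.
    digest[0] = 0;
    digest
}

fn main() {
    let out = truncate_to_bn254_field([0xFF; 32]);
    // The leading byte is cleared; the remaining bytes are untouched.
    assert_eq!(out[0], 0);
    assert!(out[1..].iter().all(|&b| b == 0xFF));
    println!("ok");
}
```

The same reasoning is why the `// SAFETY:` justification on the non-compressed path holds: any 248-bit value fits in the field.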
In `@magicblock-committor-service/src/intent_executor/mod.rs`:
- Around line 329-333: The code uses .any() on strategy.optimized_tasks to set
compressed, which treats mixed batches (some compressed, some not) as
compressed; change this to detect three states by computing let any_compressed =
strategy.optimized_tasks.iter().any(|t| t.is_compressed()); let all_compressed =
strategy.optimized_tasks.iter().all(|t| t.is_compressed()); then replace the
single bool compressed with a tri-state (enum or match on (all_compressed,
any_compressed)) and update downstream logic to handle None/All/Mixed cases
accordingly; if you cannot implement mixed-handling now, replace the TODO with a
short comment and open an issue tracking mixed compressed/non-compressed
behavior and reference the new tri-state variables
(any_compressed/all_compressed/compressed_state) so callers can be updated
later.
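The tri-state detection suggested above could look like the following sketch; `CompressionState` and `classify` are hypothetical names operating on plain booleans rather than the real task type:

```rust
// Hypothetical tri-state replacing the single `compressed: bool`, so that
// mixed batches are no longer silently treated as fully compressed.
#[derive(Debug, PartialEq, Eq)]
enum CompressionState {
    None,  // no task is compressed
    All,   // every task is compressed
    Mixed, // some tasks compressed, some not — needs dedicated handling
}

fn classify(task_flags: &[bool]) -> CompressionState {
    let any = task_flags.iter().any(|&c| c);
    let all = !task_flags.is_empty() && task_flags.iter().all(|&c| c);
    match (any, all) {
        (false, _) => CompressionState::None,
        (true, true) => CompressionState::All,
        (true, false) => CompressionState::Mixed,
    }
}

fn main() {
    assert_eq!(classify(&[false, false]), CompressionState::None);
    assert_eq!(classify(&[true, true]), CompressionState::All);
    assert_eq!(classify(&[true, false]), CompressionState::Mixed);
    println!("ok");
}
```

Downstream code would then match on the three variants, with `Mixed` either handled explicitly or rejected with a tracked error until support lands.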
In `@magicblock-committor-service/src/intent_executor/task_info_fetcher.rs`:
- Around line 255-312: The method
fetch_compressed_delegation_records_with_retries is misleading because retries
are handled by PhotonIndexer via the RetryConfig passed to
PhotonIndexer::get_multiple_compressed_accounts; rename the function to
fetch_compressed_delegation_records (and update its docstring) to reflect that
it delegates retry behavior, remove "with_retries" from the function name and
update all callers/tests to use the new name, and ensure the RetryConfig usage
(num_retries from max_retries) and parameter list (photon_client:
&PhotonIndexer, pubkeys: &[Pubkey], min_context_slot: u64, max_retries:
NonZeroUsize) remain unchanged so behavior is preserved.
- Around line 183-194: The fetch_accounts_with_retries retry loop is handling
compressed-specific errors that can't occur for the RPC path; remove the
unreachable TaskInfoFetcherError::NoCompressedAccount and
TaskInfoFetcherError::NoCompressedData match arms from
fetch_accounts_with_retries (or, if you intentionally keep them for symmetry,
add a clear comment above the match in fetch_accounts_with_retries explaining
they are only present for parity with compressed paths and will never be
produced by fetch_accounts). Ensure you only keep/handle errors that
fetch_accounts can actually return (e.g., IndexerError, DeserializeError,
LightRpcError, PhotonClientNotFound) and adjust any break Err(err) usages
accordingly so the control flow remains correct.
In `@magicblock-committor-service/src/tasks/args_task.rs`:
- Around line 159-189: The CompressedUndelegate arm in
ArgsTaskType::CompressedUndelegate currently uses
CompressedDelegationRecord::from_bytes(...).expect(...), which can panic;
replace this with proper error handling by validating the
compressed_delegation_record bytes earlier (e.g., during task construction in
the task builder or ArgsTask::new) or by returning/falling back instead of
panicking: parse the bytes with CompressedDelegationRecord::from_bytes and
propagate or map the error into a TaskBuilderError (or log and produce a
no-op/marker Instruction) so that CompressedUndelegateBuilder::args(...)
receives a validated CompressedDelegationRecord (or a safe fallback) and
instruction() can remain infallible.
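The validate-early pattern described above can be sketched with a stand-in parser. `TaskBuilderError` and `parse_record_header` below are illustrative only (the real code would call `CompressedDelegationRecord::from_bytes` and map its error):

```rust
// Hedged sketch: parse the record bytes once at task construction and
// propagate a typed error, so instruction() can later stay infallible
// instead of calling .expect() on every build.
#[derive(Debug, PartialEq, Eq)]
enum TaskBuilderError {
    InvalidCompressedRecord,
}

// Stand-in for CompressedDelegationRecord::from_bytes: here we only require
// an 8-byte header to demonstrate the shape of the validation.
fn parse_record_header(bytes: &[u8]) -> Result<[u8; 8], TaskBuilderError> {
    bytes
        .get(..8)
        .and_then(|s| <[u8; 8]>::try_from(s).ok())
        .ok_or(TaskBuilderError::InvalidCompressedRecord)
}

fn main() {
    // Well-formed input parses; truncated input surfaces a typed error
    // instead of panicking deep inside instruction building.
    assert!(parse_record_header(&[0u8; 16]).is_ok());
    assert_eq!(
        parse_record_header(&[0u8; 3]),
        Err(TaskBuilderError::InvalidCompressedRecord)
    );
    println!("ok");
}
```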
In `@magicblock-committor-service/src/transaction_preparator/delivery_preparator.rs`:
- Around line 621-622: Remove the unused CompressedDataNotFound error variant
from the error enum in delivery_preparator.rs and delete any dead imports or
match arms that referenced it; errors from compressed data fetching are instead
wrapped by TaskBuilderError (see TaskBuilderError usage around the compressed
data fetch), so ensure there are no remaining references to
CompressedDataNotFound and run cargo build/tests to verify no usages remain.
In `@magicblock-core/Cargo.toml`:
- Around line 11-15: Reorder the dependency entries so they are alphabetically
sorted: move the "light-compressed-account = { workspace = true }" line to come
before "light-sdk = { workspace = true }" in the Cargo.toml dependencies block,
keeping "compressed-delegation-client", "magicblock-magic-program-api", and
"flume" positions unchanged relative to this swap.
In `@test-integration/programs/schedulecommit/src/api.rs`:
- Around line 407-417: The helper function build_instruction uses unwrap() on
borsh::to_vec which gives no context on panic; replace .unwrap() with
.expect("Serialization of instruction should never fail") to match the codebase
pattern and provide a descriptive panic message when serializing
ScheduleCommitInstruction fails (update the call inside
Instruction::new_with_bytes in build_instruction).
In `@test-integration/test-chainlink/src/ixtest_context.rs`:
- Around line 508-509: Replace the hardcoded sleep_ms(1500).await with a
configurable value: introduce a constant or test-config variable (e.g.,
INDEXER_SYNC_MS or TEST_INDEXER_DELAY_MS) and use that symbol in place of 1500;
alternatively read from an env var or test config
(std::env::var("INDEXER_SYNC_MS").parse().unwrap_or(DEFAULT_MS)) so tests can
adjust delay per environment, and update any test helper functions (sleep_ms
call sites) to reference this new symbol.
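The env-var fallback floated above can be sketched as a small pure helper; `INDEXER_SYNC_MS`, `DEFAULT_INDEXER_SYNC_MS`, and `parse_sync_ms` are hypothetical names, with 1500 matching the currently hardcoded delay:

```rust
use std::env;

// Assumed default mirroring the current hardcoded sleep_ms(1500).
const DEFAULT_INDEXER_SYNC_MS: u64 = 1500;

// Pure parsing logic, kept separate from env access so it is testable.
fn parse_sync_ms(raw: Option<&str>) -> u64 {
    raw.and_then(|v| v.parse().ok())
        .unwrap_or(DEFAULT_INDEXER_SYNC_MS)
}

// Reads the hypothetical INDEXER_SYNC_MS variable, falling back to the
// default when unset or unparseable.
fn indexer_sync_ms() -> u64 {
    parse_sync_ms(env::var("INDEXER_SYNC_MS").ok().as_deref())
}

fn main() {
    assert_eq!(parse_sync_ms(None), 1500);
    assert_eq!(parse_sync_ms(Some("250")), 250);
    assert_eq!(parse_sync_ms(Some("not-a-number")), 1500);
    // Whatever the environment holds, the result is a usable delay.
    let _ = indexer_sync_ms();
    println!("ok");
}
```

Test helpers like `sleep_ms` would then take `indexer_sync_ms()` instead of the literal.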
In `@test-integration/test-chainlink/tests/ix_01_ensure-accounts.rs`:
- Around line 108-111: Add a brief inline comment explaining the boolean literal
passed to delegate_compressed_counter so readers know its purpose; locate the
call chain on ctx
(ctx.init_counter(&counter_auth).await.delegate_compressed_counter(&counter_auth,
false).await) and annotate the boolean (false) with a short comment like "//
<purpose>" referencing the intended behavior (e.g., disable delegation,
non-compressed flag, etc.) to make the intent clear.
In `@test-integration/test-committor-service/tests/test_ix_commit_local.rs`:
- Around line 646-679: The test names are misleading: functions
test_ix_commit_two_compressed_accounts_512kb,
test_ix_commit_three_compressed_accounts_512kb, and
test_ix_commit_six_compressed_accounts_512kb use 512 bytes but say "512kb";
rename them to test_ix_commit_*_512_bytes (e.g.,
test_ix_commit_two_compressed_accounts_512_bytes) and update any internal
references or calls in this file to match; keep the test bodies
(commit_multiple_accounts(...) calls and expect_strategies usage) unchanged.
In `@test-integration/test-runner/src/cleanup.rs`:
- Around line 26-38: In cleanup_light_validator, the stop failure message uses a
hardcoded "light validator" instead of the provided label; update the panic in
the command failure branch to include the label parameter (e.g., panic!("Failed
to stop {} validator: {:?}", label, command)) so both kill and stop error
messages consistently reference the same label; locate the function
cleanup_light_validator and replace the hardcoded string in the second panic
accordingly.
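The label-consistency fix can be sketched by deriving both failure messages from the same parameter; `kill_failure_msg` and `stop_failure_msg` are illustrative helpers, not functions from cleanup.rs:

```rust
// Hypothetical sketch: both panic messages are built from the caller's
// `label`, so "light" (or any other cluster name) appears consistently.
fn kill_failure_msg(label: &str, err: &str) -> String {
    format!("Failed to kill {} validator: {:?}", label, err)
}

fn stop_failure_msg(label: &str, err: &str) -> String {
    // Previously hardcoded "light validator"; now uses the provided label.
    format!("Failed to stop {} validator: {:?}", label, err)
}

fn main() {
    assert_eq!(
        stop_failure_msg("light", "exit 1"),
        "Failed to stop light validator: \"exit 1\""
    );
    assert!(kill_failure_msg("devnet", "oops").contains("devnet validator"));
    println!("ok");
}
```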
In `@test-manual/helius-laser/sh/05_run-laser-test.sh`:
- Around line 1-3: Restore manifest-path resolution so the script can be run
from anywhere: determine the script directory using ${BASH_SOURCE[0]} (or $0) at
runtime, compute the crate root as the script directory's parent (the helm
helius-laser crate folder), and invoke Cargo with --manifest-path pointing to
that parent directory's Cargo.toml while preserving RUST_LOG and --bin
helius-laser.
♻️ Duplicate comments (3)
magicblock-committor-program/src/instruction_builder/mod.rs (1)
11-21: The `.expect()` usage was previously flagged. This concern was raised in a prior review. The author noted that the previous `Instruction::new_with_borsh` also uses `.unwrap()` internally, making this behavior equivalent to the original code path. While the coding guidelines flag `.expect()` in production code as a major concern, the risk here is mitigated by the fact that `CommittorInstruction` is a well-defined enum with known serialization behavior. If addressing this in the future, consider returning `Result<Instruction, ...>` to fully propagate errors, though this would require changes to all call sites.
programs/magicblock/src/utils/instruction_utils.rs (1)
368-379: PDAs incorrectly marked as signers for non-undelegation paths. As previously noted, line 377 marks PDAs with `signer=true`, but PDAs cannot sign transactions. The non-undelegation branch should use `AccountMeta::new_readonly(*pubkey, false)` to align with the processor's documented design and the undelegation path pattern.
🔧 Suggested fix
for pubkey in &pdas {
    if is_undelegation {
        account_metas.push(AccountMeta::new(*pubkey, false));
    } else {
-       account_metas.push(AccountMeta::new_readonly(*pubkey, true));
+       account_metas.push(AccountMeta::new_readonly(*pubkey, false));
    }
}
compressed-delegation-client/src/lib.rs (1)
1-7: File-level allow attributes noted as temporary. Per the previous discussion, the crate-level `#![allow(deprecated)]` and `#![allow(unused_imports)]` are acceptable since this crate will be removed once the program is public.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
test-integration/test-committor-service/tests/test_ix_commit_local.rs (1)
368-379: Compressed modes not fully handled in `commit_book_order_account`. The function accepts `CommitAccountMode` but the mode matching only distinguishes between undelegate vs non-undelegate. If `CompressedCommit` is passed, it falls through to `MagicBaseIntent::Commit` (non-compressed). Additionally, `photon_client` is `None` (line 345), which would cause issues for compressed paths. Since all current callers use `CommitAccountMode::Commit`, this isn't actively broken, but the function signature is misleading. Consider either:
- Restricting the parameter type to only accept non-compressed modes.
- Fully implementing compressed mode support with proper PhotonIndexer initialization.
♻️ Option 1: Restrict to non-compressed modes only
async fn commit_book_order_account(
    changed_len: usize,
    expected_strategy: CommitStrategy,
-   mode: CommitAccountMode,
+   undelegate: bool,
) {
Then update the match to use the boolean flag, making the intent clearer.
🤖 Fix all issues with AI agents
In `@compressed-delegation-client/src/utils.rs`:
- Around line 8-15: The implementation of DataHasher for
CompressedDelegationRecord currently takes a generic parameter hash<H: Hasher>
but always uses Sha256::hash; update the code by adding a concise comment above
the impl or the hash method stating that the H generic is required by the
DataHasher trait but is intentionally unused because the Light Protocol mandates
SHA-256 with the first byte zeroed to produce a 254-bit value compatible with
BN254 field elements (hence the hash[0] = 0), and explicitly note that the
signature must remain unchanged to satisfy the trait.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (5)
compressed-delegation-client/src/utils.rs
magicblock-committor-service/src/transaction_preparator/delivery_preparator.rs
magicblock-core/Cargo.toml
test-integration/programs/schedulecommit/src/api.rs
test-integration/test-committor-service/tests/test_ix_commit_local.rs
🧰 Additional context used
📓 Path-based instructions (2)
{test-*,tools}/**
⚙️ CodeRabbit configuration file
Usage of `.unwrap()` or `.expect()` in test code is acceptable and may be treated as trivial.
Files:
test-integration/test-committor-service/tests/test_ix_commit_local.rs
test-integration/programs/schedulecommit/src/api.rs
{magicblock-*,programs,storage-proto}/**
⚙️ CodeRabbit configuration file
{magicblock-*,programs,storage-proto}/**: Treat any usage of `.unwrap()` or `.expect()` in production Rust code as a MAJOR issue.
These should not be categorized as trivial or nit-level concerns.
Request proper error handling or explicit justification with invariants.
Files:
magicblock-core/Cargo.toml
magicblock-committor-service/src/transaction_preparator/delivery_preparator.rs
🧠 Learnings (12)
📓 Common learnings
Learnt from: Dodecahedr0x
Repo: magicblock-labs/magicblock-validator PR: 639
File: test-integration/test-chainlink/tests/ix_01_ensure-accounts.rs:85-114
Timestamp: 2025-11-19T11:31:24.218Z
Learning: In the magicblock-validator codebase, compressed delegation records are never subscribed to by design. Tests for compressed delegation flows should focus on verifying that the delegated account (e.g., counter PDA) is cloned as delegated and not subscribed, but do not need to check for subscription to delegation-record-like PDAs in the compressed path because subscriptions to compressed delegation records never occur.
Learnt from: Dodecahedr0x
Repo: magicblock-labs/magicblock-validator PR: 639
File: magicblock-chainlink/tests/04_redeleg_other_separate_slots.rs:158-165
Timestamp: 2025-11-18T08:47:39.702Z
Learning: In magicblock-chainlink tests involving compressed accounts, `set_remote_slot()` sets the slot of the `AccountSharedData`, while `compressed_account_shared_with_owner_and_slot()` sets the slot of the delegation record. These are two different fields and both calls are necessary.
Learnt from: Dodecahedr0x
Repo: magicblock-labs/magicblock-validator PR: 639
File: test-integration/test-committor-service/tests/test_ix_commit_local.rs:867-881
Timestamp: 2025-12-03T09:33:48.707Z
Learning: Repo: magicblock-labs/magicblock-validator PR: 639
Context: test-integration/test-committor-service/tests/test_ix_commit_local.rs (ix_commit_local)
Learning: The PhotonIndexer used for compressed account fetches (get_compressed_account) has built‑in retry logic (defaults to ~10 attempts), so tests should not add separate retry loops around compressed fetches unless there’s a specific need.
📚 Learning: 2025-12-03T09:33:48.707Z
Learnt from: Dodecahedr0x
Repo: magicblock-labs/magicblock-validator PR: 639
File: test-integration/test-committor-service/tests/test_ix_commit_local.rs:867-881
Timestamp: 2025-12-03T09:33:48.707Z
Learning: Repo: magicblock-labs/magicblock-validator PR: 639
Context: test-integration/test-committor-service/tests/test_ix_commit_local.rs (ix_commit_local)
Learning: The PhotonIndexer used for compressed account fetches (get_compressed_account) has built‑in retry logic (defaults to ~10 attempts), so tests should not add separate retry loops around compressed fetches unless there’s a specific need.
Applied to files:
test-integration/test-committor-service/tests/test_ix_commit_local.rs
magicblock-core/Cargo.toml
test-integration/programs/schedulecommit/src/api.rs
magicblock-committor-service/src/transaction_preparator/delivery_preparator.rs
📚 Learning: 2025-11-18T08:47:39.702Z
Learnt from: Dodecahedr0x
Repo: magicblock-labs/magicblock-validator PR: 639
File: magicblock-chainlink/tests/04_redeleg_other_separate_slots.rs:158-165
Timestamp: 2025-11-18T08:47:39.702Z
Learning: In magicblock-chainlink tests involving compressed accounts, `set_remote_slot()` sets the slot of the `AccountSharedData`, while `compressed_account_shared_with_owner_and_slot()` sets the slot of the delegation record. These are two different fields and both calls are necessary.
Applied to files:
test-integration/test-committor-service/tests/test_ix_commit_local.rs
📚 Learning: 2025-10-21T14:00:54.642Z
Learnt from: bmuddha
Repo: magicblock-labs/magicblock-validator PR: 578
File: magicblock-aperture/src/requests/websocket/account_subscribe.rs:18-27
Timestamp: 2025-10-21T14:00:54.642Z
Learning: In magicblock-aperture account_subscribe handler (src/requests/websocket/account_subscribe.rs), the RpcAccountInfoConfig fields data_slice, commitment, and min_context_slot are currently ignored—only encoding is applied. This is tracked as technical debt in issue `#579`: https://github.com/magicblock-labs/magicblock-validator/issues/579
Applied to files:
test-integration/test-committor-service/tests/test_ix_commit_local.rs
magicblock-core/Cargo.toml
test-integration/programs/schedulecommit/src/api.rs
magicblock-committor-service/src/transaction_preparator/delivery_preparator.rs
📚 Learning: 2025-11-19T11:31:24.218Z
Learnt from: Dodecahedr0x
Repo: magicblock-labs/magicblock-validator PR: 639
File: test-integration/test-chainlink/tests/ix_01_ensure-accounts.rs:85-114
Timestamp: 2025-11-19T11:31:24.218Z
Learning: In the magicblock-validator codebase, compressed delegation records are never subscribed to by design. Tests for compressed delegation flows should focus on verifying that the delegated account (e.g., counter PDA) is cloned as delegated and not subscribed, but do not need to check for subscription to delegation-record-like PDAs in the compressed path because subscriptions to compressed delegation records never occur.
Applied to files:
test-integration/test-committor-service/tests/test_ix_commit_local.rs
compressed-delegation-client/src/utils.rs
📚 Learning: 2025-12-03T09:36:01.527Z
Learnt from: Dodecahedr0x
Repo: magicblock-labs/magicblock-validator PR: 639
File: magicblock-chainlink/src/remote_account_provider/mod.rs:1350-1353
Timestamp: 2025-12-03T09:36:01.527Z
Learning: Repo: magicblock-labs/magicblock-validator
File: magicblock-chainlink/src/remote_account_provider/mod.rs
Context: consolidate_fetched_remote_accounts
Learning: For unexpected result counts (>2), the project prefers logging an error and returning an empty Vec over panicking; acceptable during development per maintainer (Dodecahedr0x).
Applied to files:
test-integration/test-committor-service/tests/test_ix_commit_local.rs
magicblock-committor-service/src/transaction_preparator/delivery_preparator.rs
📚 Learning: 2025-11-26T13:27:46.702Z
Learnt from: Dodecahedr0x
Repo: magicblock-labs/magicblock-validator PR: 639
File: magicblock-committor-service/src/transaction_preparator/delivery_preparator.rs:172-193
Timestamp: 2025-11-26T13:27:46.702Z
Learning: The Photon indexer (PhotonIndexer from light-client) handles slot validation internally when an IndexerRpcConfig with a slot constraint is provided. Additional slot validation at the call site is not needed when fetching compressed data via get_compressed_data.
Applied to files:
test-integration/test-committor-service/tests/test_ix_commit_local.rs
📚 Learning: 2025-11-24T14:21:00.996Z
Learnt from: Dodecahedr0x
Repo: magicblock-labs/magicblock-validator PR: 639
File: Cargo.toml:58-58
Timestamp: 2025-11-24T14:21:00.996Z
Learning: In the magicblock-validator codebase, magicblock-api/Cargo.toml intentionally uses borsh = "1.5.3" (instead of the workspace version 0.10.4) because it needs to deserialize types from the magic-domain-program external dependency, which requires borsh 1.5.x compatibility. This is an intentional exception for interoperability with the magic domain program.
Applied to files:
test-integration/test-committor-service/tests/test_ix_commit_local.rs
magicblock-core/Cargo.toml
📚 Learning: 2025-11-21T10:22:07.520Z
Learnt from: taco-paco
Repo: magicblock-labs/magicblock-validator PR: 661
File: magicblock-committor-service/src/intent_executor/single_stage_executor.rs:20-28
Timestamp: 2025-11-21T10:22:07.520Z
Learning: In magicblock-committor-service's SingleStageExecutor and TwoStageExecutor (single_stage_executor.rs and two_stage_executor.rs), the fields transaction_strategy, junk, and patched_errors are intentionally public because these executors are designed to be used independently outside of the IntentExecutor scope, and callers need access to these execution reports for cleanup and error handling.
Applied to files:
test-integration/test-committor-service/tests/test_ix_commit_local.rs
magicblock-committor-service/src/transaction_preparator/delivery_preparator.rs
📚 Learning: 2025-11-07T18:19:48.996Z
Learnt from: Dodecahedr0x
Repo: magicblock-labs/magicblock-validator PR: 614
File: test-integration/programs/schedulecommit/Cargo.toml:19-20
Timestamp: 2025-11-07T18:19:48.996Z
Learning: In Solana programs (crates depending on solana-program), the empty feature flags `custom-heap = []` and `custom-panic = []` are standard declarations required to avoid compiler warnings. These should not be flagged as unused features in Cargo.toml files for Solana program crates.
Applied to files:
magicblock-core/Cargo.toml
📚 Learning: 2025-11-07T13:09:52.253Z
Learnt from: bmuddha
Repo: magicblock-labs/magicblock-validator PR: 589
File: test-kit/src/lib.rs:275-0
Timestamp: 2025-11-07T13:09:52.253Z
Learning: In test-kit, the transaction scheduler in ExecutionTestEnv is not expected to shut down during tests. Therefore, using `.unwrap()` in test helper methods like `schedule_transaction` is acceptable and will not cause issues in the test environment.
Applied to files:
test-integration/programs/schedulecommit/src/api.rs
📚 Learning: 2025-11-07T14:20:31.457Z
Learnt from: thlorenz
Repo: magicblock-labs/magicblock-validator PR: 621
File: magicblock-chainlink/src/remote_account_provider/chain_pubsub_actor.rs:457-495
Timestamp: 2025-11-07T14:20:31.457Z
Learning: In magicblock-chainlink/src/remote_account_provider/chain_pubsub_client.rs, the unsubscribe closure returned by PubSubConnection::account_subscribe(...) resolves to () (unit), not a Result. Downstream code should not attempt to inspect an unsubscribe result and can optionally wrap it in a timeout to guard against hangs.
Applied to files:
test-integration/programs/schedulecommit/src/api.rs
🧬 Code graph analysis (3)
test-integration/test-committor-service/tests/test_ix_commit_local.rs (2)
magicblock-core/src/compression/mod.rs (1)
derive_cda_from_pda (14-27)
compressed-delegation-client/src/generated/accounts/compressed_delegation_record.rs (1)
from_bytes (39-42)
test-integration/programs/schedulecommit/src/api.rs (1)
magicblock-committor-program/src/instruction_builder/mod.rs (1)
build_instruction(11-21)
magicblock-committor-service/src/transaction_preparator/delivery_preparator.rs (4)
magicblock-committor-service/src/tasks/args_task.rs (4)
get_compressed_data (384-397), strategy (317-319), delegated_account (399-418), from (53-55)
magicblock-committor-service/src/tasks/buffer_task.rs (4)
get_compressed_data (239-243), strategy (199-201), delegated_account (245-254), from (98-106)
magicblock-committor-service/src/tasks/task_builder.rs (1)
get_compressed_data (541-597)
magicblock-committor-service/src/utils.rs (1)
persist_status_update (6-22)
🔇 Additional comments (17)
magicblock-core/Cargo.toml (1)
11-15: LGTM! The new workspace dependencies appropriately support the compression feature:
- compressed-delegation-client for the delegation API
- light-compressed-account and light-sdk for Light Protocol integration
- magicblock-magic-program-api for magic program interfaces
- tokio with the sync feature for async synchronization primitives

Dependencies are now in correct alphabetical order.
Also applies to: 28-28
test-integration/programs/schedulecommit/src/api.rs (2)
407-418: LGTM! Clean helper that aligns with codebase patterns. The implementation follows the same pattern as magicblock-committor-program/src/instruction_builder/mod.rs, including the descriptive .expect() message. This centralizes instruction construction and addresses the previous review feedback.
44-48: LGTM! Mechanical refactor to use the centralized helper. All instruction builders consistently use the new build_instruction helper, improving maintainability by centralizing the serialization logic.
Also applies to: 64-68, 85-89
test-integration/test-committor-service/tests/test_ix_commit_local.rs (7)
58-63: LGTM! Clean enum design for commit modes. The CommitAccountMode enum cleanly captures all four combinations of commit/undelegate × compressed/non-compressed. This provides good extensibility for test scenarios.
237-238: LGTM! PhotonIndexer integration looks correct. The PhotonIndexer is properly instantiated with PHOTON_URL and wrapped in Arc for shared ownership. Based on learnings, PhotonIndexer has built-in retry logic, so no additional retry handling is needed.
602-679: Good test coverage for compressed accounts. The new compressed account tests cover:
- Single account commits at different sizes (100, 500 bytes)
- Undelegate scenarios
- Multiple account commits (2, 3, 6 accounts)
The consistent use of CommitStrategy::StateArgs for compressed accounts aligns with the expected behavior.
1010-1046: LGTM! Compressed account validation path is well-structured. The compressed path correctly:
- Validates the on-chain PDA is owned by compressed_delegation_client::ID
- Derives the CDA address using derive_cda_from_pda
- Fetches the compressed account via PhotonIndexer (which has built-in retry logic per learnings)
- Validates the compressed account data matches expectations

The separation between compressed and non-compressed validation paths is clean.
1197-1245: LGTM! Validation logic is correct. The validate_compressed_account function properly:
- Deserializes CompressedDelegationRecord from the compressed account data
- Validates data content, lamports, and owner match expectations
- Provides detailed trace logging for debugging mismatches

Note: Unlike validate_account, this doesn't take remaining_tries since it's called after PhotonIndexer's built-in retry logic has already handled retries.
778-820: LGTM! Bundle creation correctly handles compressed accounts. The create_bundles function cleanly branches on the compressed flag to use the appropriate initialization function while maintaining the same bundle structure output.
859-881: LGTM! All commit modes handled correctly. The match expression exhaustively handles all four CommitAccountMode variants, creating the appropriate MagicBaseIntent for each case. This is the correct pattern for extending functionality while maintaining type safety.
compressed-delegation-client/src/utils.rs (1)
5-6: LGTM! The discriminator constant ("MAGICBLK" in ASCII) follows the standard Anchor pattern for account identification.
1-42: LGTM! The imports are well-organized and all newly added types (Arc, PhotonIndexer, IndexerRpcConfig, CommitSlotFn, get_compressed_data, TaskBuilderError) are used appropriately in the implementation below.
64-89: LGTM! The updated signature correctly propagates photon_client and commit_slot_fn through the preparation pipeline. Cloning commit_slot_fn for each task iteration is the correct approach since each async block needs its own copy.
125-166: LGTM! The Buffer preparation flow is well-structured with clear status progression and proper error propagation. The persistence of intermediate states (PartiallyInitialized → Initialized → FullyInitialized) enables recovery in failure scenarios.
241-255: Confirm that Compressed tasks should not have retry-on-error behavior. The error recovery logic (cleanup and retry) only applies to Buffer tasks. If a Compressed task fails, the error propagates directly without retry. This asymmetry appears intentional, since compressed data fetch failures (indexer errors, missing data) likely aren't recoverable via local cleanup, but please confirm this is the intended behavior.
269-273: LGTM! The updated signatures that accept &BufferPreparationTask directly align with the refactored prepare_task implementation. This provides better type safety by ensuring the buffer-specific methods only receive buffer-related data.
629-645: LGTM! The new InternalError variants comprehensively cover the failure modes introduced by compressed data handling:
- TransactionCreationError / TransactionSigningError for transaction building failures
- DelegatedAccountNotFound / PhotonClientNotFound for missing prerequisites
- TaskBuilderError for compressed data fetch failures

The #[from] attributes enable ergonomic error propagation with ?.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
magicblock-chainlink/src/chainlink/fetch_cloner/mod.rs (1)
304-496: Set confined when compressed delegation authority is default.
The compressed delegation path only sets delegated, so a Pubkey::default() authority isn't marked confined, diverging from non-compressed delegation handling.
Proposed fix

```diff
 let is_delegated_to_us =
     delegation_record.authority.eq(&self.validator_pubkey);
+let is_confined =
+    delegation_record.authority.eq(&Pubkey::default());
 account.set_delegated(is_delegated_to_us);
+account.set_confined(is_confined);
```
🤖 Fix all issues with AI agents
In `@magicblock-chainlink/src/remote_account_provider/mod.rs`:
- Around line 1176-1183: The code currently uses
fetching_accounts.lock().unwrap() which can panic on a poisoned mutex; update
the block that acquires the lock (where fetching_accounts is used to create the
mutable fetching variable and later call fetching.remove(pubkey)) to handle a
poisoned lock by matching on fetching_accounts.lock() and either recovering with
poison.into_inner() or returning an Err/early return, then proceed to
remove(pubkey) from the obtained mutable fetching map; ensure you reference the
variables and methods fetching_accounts, fetching, lock(), remove(pubkey) and
avoid unwrap/expect so the code continues or fails gracefully instead of
panicking.
- Around line 1132-1143: The success metrics are currently emitted whenever
photon_client.is_some(), which logs compressed success even if no compressed
fetch ran or it failed; update the guard around the metrics (the block that
calls inc_compressed_account_fetches_success,
inc_compressed_account_fetches_found, inc_compressed_account_fetches_not_found)
to only run when a compressed fetch actually occurred — e.g., require
compressed_accounts_count > 0 and that at least one of compressed_found_count or
compressed_not_found_count is non‑zero (or use an explicit photon fetch success
flag if available) in addition to photon_client.is_some(); locate these symbols
(photon_client, compressed_accounts_count, compressed_found_count,
compressed_not_found_count, and the inc_compressed_* functions) and change the
if condition to include those checks.
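The guard described in this prompt can be sketched in isolation. The struct and function below are stand-ins for the real metrics counters, assuming the fix is to combine the client check with evidence that a compressed fetch actually ran:

```rust
// Stand-in for the compressed-fetch metrics counters.
struct Metrics {
    success: u64,
    found: u64,
    not_found: u64,
}

fn record_compressed_metrics(
    metrics: &mut Metrics,
    has_photon_client: bool,
    compressed_total: u64,
    found: u64,
    not_found: u64,
) {
    // Guard: a client alone is not enough; some compressed fetch must
    // have run and resolved (found or not-found) at least one account.
    if has_photon_client && compressed_total > 0 && (found + not_found) > 0 {
        metrics.success += compressed_total;
        metrics.found += found;
        metrics.not_found += not_found;
    }
}

fn main() {
    let mut m = Metrics { success: 0, found: 0, not_found: 0 };
    // Client present but no compressed fetch ran: nothing is recorded.
    record_compressed_metrics(&mut m, true, 0, 0, 0);
    assert_eq!(m.success, 0);
    // A real compressed fetch: totals are recorded.
    record_compressed_metrics(&mut m, true, 3, 2, 1);
    assert_eq!((m.success, m.found, m.not_found), (3, 2, 1));
    println!("ok");
}
```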
In `@magicblock-committor-service/src/tasks/task_builder.rs`:
- Around line 326-340: The async closure over committed_accounts borrows the
closure argument which is fragile; change the mapping to capture cloned data
with an async move so each future owns its input: clone the account or its
pubkey before creating the future (e.g. let account_clone = account.clone() or
let pubkey = account.pubkey.clone()), then use async move {
Ok(Some(get_compressed_data(&account_clone.pubkey (or &pubkey), photon_client,
None).await?)) } so get_compressed_data futures do not borrow from the loop
variable and match the pattern used in commit_tasks.
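The capture issue this prompt describes can be sketched with plain move closures, which follow the same ownership rules as async move blocks. The Account type and names here are illustrative:

```rust
// Borrow-vs-own: cloning the key before the closure lets each closure
// own its input instead of borrowing the loop variable, so it can
// outlive the iteration (the same fix applies to `async move` blocks).
#[derive(Clone)]
struct Account {
    pubkey: String,
}

fn main() {
    let committed_accounts = vec![
        Account { pubkey: "A".to_string() },
        Account { pubkey: "B".to_string() },
    ];

    let fetchers: Vec<Box<dyn Fn() -> String>> = committed_accounts
        .iter()
        .map(|account| {
            let pubkey = account.pubkey.clone(); // own the data
            Box::new(move || format!("fetched:{pubkey}"))
                as Box<dyn Fn() -> String>
        })
        .collect();

    let results: Vec<String> = fetchers.iter().map(|f| f()).collect();
    assert_eq!(results, vec!["fetched:A", "fetched:B"]);
    println!("ok");
}
```

Without the clone, each closure would borrow `account`, tying its lifetime to the iterator and preventing the futures from being collected and awaited independently.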
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
magicblock-chainlink/src/remote_account_provider/mod.rs (1)
1224-1233: Avoid panic on poisoned mutex in fetch_from_rpc.
fetching_accounts.lock().unwrap() can panic in production. Handle poisoning explicitly to avoid crashing the background fetch task. As per coding guidelines, avoid unwrap/expect in production paths.
🔧 Proposed fix

```diff
-let mut fetching = fetching_accounts.lock().unwrap();
+let mut fetching = match fetching_accounts.lock() {
+    Ok(guard) => guard,
+    Err(poisoned) => {
+        error!(
+            "fetching_accounts lock poisoned; continuing with inner state: {poisoned:?}"
+        );
+        poisoned.into_inner()
+    }
+};
```
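The recovery pattern in the proposed fix can be exercised in a self-contained sketch; the map type and helper are illustrative, and only the match-on-lock shape mirrors the fix:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Instead of `.lock().unwrap()`, match on the result and recover the
// inner state with `into_inner()` so a panicked writer does not take
// the whole task down.
fn locked_len(map: &Arc<Mutex<Vec<u32>>>) -> usize {
    let guard = match map.lock() {
        Ok(guard) => guard,
        // A poisoned lock still holds valid (if possibly half-updated)
        // data; recover it instead of panicking.
        Err(poisoned) => poisoned.into_inner(),
    };
    guard.len()
}

fn main() {
    let data = Arc::new(Mutex::new(vec![1, 2, 3]));

    // Poison the mutex by panicking while the guard is held.
    let poisoner = Arc::clone(&data);
    let _ = thread::spawn(move || {
        let _guard = poisoner.lock().unwrap();
        panic!("poison the lock");
    })
    .join();

    assert!(data.is_poisoned());
    // `.lock().unwrap()` would panic here; the recovery path still works.
    assert_eq!(locked_len(&data), 3);
    println!("ok");
}
```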
🤖 Fix all issues with AI agents
In `@magicblock-chainlink/src/chainlink/fetch_cloner/mod.rs`:
- Around line 747-754: The failure path from
pipeline::resolve_compressed_delegated_accounts currently returns early and
leaks watchers; modify the error handling around the await so that on Err you
run the same cleanup used elsewhere to cancel/unsubscribe the newly created
subscriptions (e.g., invoke the module/function that tears down watchers — the
code that unsubscribes new_subscriptions or calls
cancel_new_subscriptions/unsubscribe_new_subscriptions) before returning the
error; place this cleanup immediately after the await error is detected so
compressed_delegated_accounts resolution does not leak watchers.
- Around line 485-498: When constructing the compressed delegation branch, also
mark the account as confined so authority==default accounts are treated as
confined; after the existing account.set_delegated(is_delegated_to_us) call in
the compressed delegation handling, call account.set_confined(true) (or
otherwise set the confined flag on the Account type) when the
delegation_record.authority is the default authority (i.e., authority ==
Pubkey::default() / zero key) so that the account and the returned
DelegationRecord are consistent.
♻️ Duplicate comments (3)
test-integration/test-committor-service/tests/test_ix_commit_local.rs (1)
840-841: PhotonIndexer created unconditionally for all modes. Unlike commit_single_account (lines 237-245), this function always creates the PhotonIndexer even for non-compressed modes where it won't be used. Consider applying the same conditional pattern for consistency:

```rust
let photon_client = if matches!(
    mode_all,
    CommitAccountMode::CompressedCommit
        | CommitAccountMode::CompressedCommitAndUndelegate
) {
    Some(Arc::new(PhotonIndexer::new(PHOTON_URL.to_string(), None)))
} else {
    None
};
```
magicblock-committor-service/src/intent_executor/single_stage_executor.rs (1)
64-68: Consider making the None type explicit for clarity. The None argument at line 67 lacks an explicit type annotation. For consistency with the pattern used later in this file (line 223, where None::<CommitSlotFn> is used), consider adding the explicit type here as well.
♻️ Optional: Add explicit type annotation

```diff
 .prepare_and_execute_strategy(
     &mut self.transaction_strategy,
     persister,
-    None,
+    None::<CommitSlotFn>,
 )
```
magicblock-chainlink/src/remote_account_provider/mod.rs (1)
1125-1130: Use compressed totals for success metric count.
inc_compressed_account_fetches_success should reflect compressed totals rather than all pubkeys to avoid inflating metrics.
🔧 Proposed fix

```diff
-inc_compressed_account_fetches_success(pubkeys.len() as u64);
+inc_compressed_account_fetches_success(compressed_total);
```
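As background for the None::<CommitSlotFn> suggestion earlier in this section, here is a minimal sketch of why the turbofish is needed when an Option's type parameter is generic. Callback and run_with are hypothetical stand-ins:

```rust
// A bare `None` only compiles when the compiler can infer `T` in
// `Option<T>` from context. With a generic parameter, the turbofish
// pins the type explicitly.
type Callback = fn(u64) -> u64;

fn run_with<F: Fn(u64) -> u64>(input: u64, hook: Option<F>) -> u64 {
    match hook {
        Some(f) => f(input),
        None => input,
    }
}

fn main() {
    // `run_with(7, None)` would not compile: `F` cannot be inferred.
    let untouched = run_with(7, None::<Callback>);
    assert_eq!(untouched, 7);

    let doubled = run_with(7, Some(|slot| slot * 2));
    assert_eq!(doubled, 14);
    println!("ok");
}
```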
taco-paco
left a comment
Initial review, let's address comments and I will give it another go since I need clarifications for some implementation details.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@magicblock-chainlink/tests/05_redeleg_other_same_slot.rs`:
- Around line 183-194: The call to ctx.send_and_receive_account_update uses a
hardcoded Some(400) for the update slot which mismatches the account/delegation
record slot variable; change the update slot argument to Some(slot) so the
delegation update uses the same slot as delegated_acc (refer to
send_and_receive_account_update, delegated_acc, slot, pubkey) ensuring "same
slot" semantics are preserved.
Actionable comments posted: 16
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
magicblock-committor-service/src/intent_executor/two_stage_executor.rs (1)
225-234: ⚠️ Potential issue | 🟠 Major
Pass the recovered commit signature into this finalize retry.
UnfinalizedAccountError already carries the signature of the commit that needs finalization. Passing None here drops the same context the normal finalize path now threads via Some(self.state.commit_signature), so this recovery branch can prepare the retry without the slot/signature metadata it needs.
🛠️ Proposed fix

```diff
 inner
     .prepare_and_execute_strategy(
         &mut TransactionStrategy {
             optimized_tasks: vec![finalize_task],
             lookup_tables_keys: vec![],
             compressed: task.is_compressed(),
         },
         &None::<IntentPersisterImpl>,
-        None,
+        failed_signature.clone(),
     )
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@magicblock-committor-service/src/intent_executor/two_stage_executor.rs` around lines 225 - 234, The retry call to inner.prepare_and_execute_strategy is dropping the recovered commit signature by passing None as the final argument; extract the signature carried by the UnfinalizedAccountError (the recovered commit signature) and pass it into prepare_and_execute_strategy instead of None so the TransactionStrategy retry has the slot/signature metadata it needs (i.e., replace the trailing None with Some(recovered_commit_signature) when calling prepare_and_execute_strategy).magicblock-committor-service/src/intent_executor/single_stage_executor.rs (1)
216-223: ⚠️ Potential issue | 🟠 Major
Pass the original signature into the finalize retry.
Line 223 still calls prepare_and_execute_strategy(..., None) even though UnfinalizedAccountError already carried failed_signature. The new preparator parameter exists to rebuild compressed finalize work with commit context; dropping it here breaks the compressed retry path after a partially committed single-stage transaction. At minimum, forward *failed_signature when task.is_compressed().
Verify each finding against the current code and only fix it if needed. In `@magicblock-committor-service/src/intent_executor/single_stage_executor.rs` around lines 216 - 223, The finalize retry currently calls prepare_and_execute_strategy(..., None) which drops the original signature carried by UnfinalizedAccountError; update the call to forward the original failed signature into the new preparator parameter so compressed retries can rebuild finalize work with commit context. Specifically, when invoking prepare_and_execute_strategy for the single-stage finalize path, pass the preparator that includes the original failed_signature (at minimum forward *failed_signature when task.is_compressed()), so TransactionStrategy/prepare logic can reconstruct compressed finalize work rather than using None.magicblock-committor-service/src/intent_executor/mod.rs (1)
425-449: ⚠️ Potential issue | 🟠 Major
CPI-limit fallback is not updated for CompressedCommitAndFinalize.
Even with the new compressed field on the derived strategies, line 415 still partitions only TaskType::Commit into the commit stage. TaskType::CompressedCommitAndFinalize will therefore fall into finalize_stage_tasks, so a compressed retry can run an empty commit stage and a finalize stage that still contains commit work. Fail fast for compressed tasks here, or translate them into a real two-stage pair before retrying.
Verify each finding against the current code and only fix it if needed. In `@magicblock-committor-service/src/intent_executor/mod.rs` around lines 425 - 449, The commit/finalize partitioning doesn't treat TaskType::CompressedCommitAndFinalize correctly, so compressed tasks end up only in finalize_stage_tasks; update the logic that builds commit_stage_tasks and finalize_stage_tasks (the code that feeds TransactionStrategy/commit_strategy and finalize_strategy) to either (A) treat TaskType::CompressedCommitAndFinalize as a two-stage pair by placing its commit portion into commit_stage_tasks and its finalize portion into finalize_stage_tasks (preserving strategy.compressed), or (B) detect compressed tasks here and fail fast (return an error) before constructing TransactionStrategy; ensure TransactionStrategy instances (commit_strategy and finalize_strategy) reflect the corrected partitioning for compressed tasks so retries do not run an empty commit stage and a commit-containing finalize stage.magicblock-committor-service/src/tasks/task_strategist.rs (1)
214-227: ⚠️ Potential issue | 🟠 Major
Validate compression before persisting strategy state.
PersistorVisitor runs before TransactionStrategy::try_new() in both branches. If the new compression check returns InconsistentTaskCompression, build_strategy() now errors after already persisting a strategy that was never actually built.
Suggested fix

```diff
 if Self::try_optimize_tx_size_if_needed(&mut tasks)?
     <= MAX_ENCODED_TRANSACTION_SIZE
 {
+    let strategy = TransactionStrategy::try_new(tasks, vec![])?;
     // Persist tasks strategy
     if let Some(persistor) = persistor {
         let mut persistor_visitor = PersistorVisitor {
             persistor,
             context: PersistorContext::PersistStrategy {
                 uses_lookup_tables: false,
             },
         };
-        tasks
+        strategy
+            .optimized_tasks
             .iter()
             .for_each(|task| task.visit(&mut persistor_visitor));
     }
-
-    TransactionStrategy::try_new(tasks, vec![])
+    Ok(strategy)
 }
@@
 else if Self::attempt_lookup_tables(&tasks) {
+    let lookup_tables_keys =
+        Self::collect_lookup_table_keys(validator, &tasks);
+    let strategy =
+        TransactionStrategy::try_new(tasks, lookup_tables_keys)?;
     // Persist tasks strategy
     let mut persistor_visitor = PersistorVisitor {
         persistor,
         context: PersistorContext::PersistStrategy {
             uses_lookup_tables: true,
         },
     };
-    tasks
+    strategy
+        .optimized_tasks
         .iter()
         .for_each(|task| task.visit(&mut persistor_visitor));
-
-    // Get lookup table keys
-    let lookup_tables_keys =
-        Self::collect_lookup_table_keys(validator, &tasks);
-    TransactionStrategy::try_new(tasks, lookup_tables_keys)
+    Ok(strategy)
 } else {
```
Also applies to: 232-246
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@magicblock-committor-service/src/tasks/task_strategist.rs` around lines 214 - 227, The persistor is currently running before the strategy is actually built so InconsistentTaskCompression errors can occur after a strategy was persisted; change the flow to validate/build the strategy first (call TransactionStrategy::try_new or the build_strategy validation that can return InconsistentTaskCompression) and only if that succeeds run the PersistorVisitor (the PersistorVisitor struct with PersistorContext::PersistStrategy); in short, perform TransactionStrategy::try_new/build validation before invoking the persistor and move the persistor invocation to after the successful strategy creation.test-integration/programs/flexi-counter/src/processor.rs (1)
439-486: ⚠️ Potential issue | 🔴 Critical
Add the callee program account to each CPI invocation.
These calls construct instructions targeting magic_program_info or system_program but omit the executable program account from the AccountInfo slice passed to invoke/invoke_signed. The Solana runtime requires all accounts listed in an instruction's account metadata to be present in the invocation; omitting the program account will cause these transactions to fail at runtime.
Affected locations:
- process_schedule_task (line 481): missing magic_program_info
- process_cancel_task (line 509): missing magic_program_info
- process_schedule_commit_compressed (line 716): missing magic_program
- process_external_undelegate_compressed (lines 746, 760): missing system_program

🔧 Suggested fix

```diff
 invoke_signed(
     &ix,
-    &[payer_info.clone(), counter_pda_info.clone()],
+    &[
+        magic_program_info.clone(),
+        payer_info.clone(),
+        counter_pda_info.clone(),
+    ],
     &[&seeds],
 )?;
@@
-invoke(&ix, std::slice::from_ref(payer_info))?;
+invoke(&ix, &[magic_program_info.clone(), payer_info.clone()])?;
@@
-let account_refs =
-    vec![payer.clone(), magic_context.clone(), counter.clone()];
+let account_refs = vec![
+    magic_program.clone(),
+    payer.clone(),
+    magic_context.clone(),
+    counter.clone(),
+];
@@
-let [payer, delegated_account, _system_program] = accounts else {
+let [payer, delegated_account, system_program] = accounts else {
     return Err(ProgramError::NotEnoughAccountKeys);
 };
@@
 &system_instruction::transfer(
     payer.key,
     delegated_account.key,
     args.delegation_record.lamports - delegated_account.lamports(),
 ),
-&[payer.clone(), delegated_account.clone()],
+&[payer.clone(), delegated_account.clone(), system_program.clone()],
 )?;
@@
 &system_instruction::transfer(
     delegated_account.key,
     payer.key,
     delegated_account.lamports() - args.delegation_record.lamports,
 ),
-&[delegated_account.clone(), payer.clone()],
+&[delegated_account.clone(), payer.clone(), system_program.clone()],
 &[&seeds],
 )?;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test-integration/programs/flexi-counter/src/processor.rs` around lines 439 - 486, The CPI invocations build Instructions with AccountMeta referencing the callee program (e.g., Instruction::new_with_bytes targeting magic_program_info or system_program) but then call invoke/invoke_signed without including that program's AccountInfo in the account slice; add the corresponding program AccountInfo (magic_program_info for Magic CPIs, system_program_info for system CPIs) to the slice passed to invoke/invoke_signed so the AccountInfos match the Instruction's AccountMeta. Update the calls in process_schedule_task (where ix targets magic_program_info and invoke_signed currently passes &[payer_info, counter_pda_info]), process_cancel_task (same), process_schedule_commit_compressed (add magic_program), and process_external_undelegate_compressed (add system_program) to include the program AccountInfo in the same order/position expected by the Instruction.
♻️ Duplicate comments (6)
Cargo.toml (1)
90-94: 🧹 Nitpick | 🔵 Trivial
Consider using branch reference instead of commit hash for light-protocol dependencies.
Pinning to commit 849e6aa prevents automatic updates when the branch is updated. Per repository conventions, branch references are preferred to allow automatic updates.
Verify each finding against the current code and only fix it if needed. In `@Cargo.toml` around lines 90 - 94, The five git dependencies (light-client, light-compressed-account, light-hasher, light-sdk, light-sdk-types) are pinned to a commit hash (rev = "849e6aa"); replace each rev entry with a branch reference (e.g., branch = "main" or the project’s canonical branch name) so Cargo will follow branch updates, then run cargo update to refresh lockfile; ensure you change the entries for the exact dependency keys named above.magicblock-committor-service/src/intent_executor/mod.rs (1)
341-347: ⚠️ Potential issue | 🟠 Major
Don't carry a single compression flag through recovery.
Line 343 refreshes commit IDs for every committed pubkey with strategy.compressed, and the rebuilt strategies in these helpers keep copying that same flag forward. A TransactionStrategy can still contain both regular and compressed tasks, so one subset will be re-fetched/prepared against the wrong source. Recovery needs to derive compression per task/group instead of assuming the whole strategy is homogeneous.
Also applies to: 369-372, 391-395, 472-481
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@magicblock-committor-service/src/intent_executor/mod.rs` around lines 341 - 347, The code incorrectly propagates a single strategy.compressed flag through recovery (see task_info_fetcher.fetch_next_commit_ids uses strategy.compressed and rebuilt TransactionStrategy copies it), causing mixed compressed/regular tasks to be fetched/prepared against the wrong source; fix by deriving compression on a per-task or per-pubkey basis: group committed_pubkeys (or tasks) by their compression property, call fetch_next_commit_ids for each group with the correct compressed flag, and when rebuilding TransactionStrategy ensure the compressed field is set per task/group rather than reusing the original strategy.compressed; update the helpers that rebuild strategies (the ones around the existing calls at the noted ranges) to assign compression per task and to merge results from each group.magicblock-chainlink/src/chainlink/fetch_cloner/mod.rs (1)
688-695:⚠️ Potential issue | 🟠 MajorMake compressed account resolution fallible here.
This is the only account-resolution branch that consumes a plain
Vec, so Photon fetch/parse failures cannot reach the caller. A partial compressed fetch can therefore returnOk(...)with requested compressed pubkeys missing fromaccounts_to_clone, and the later cancel strategy will treat them as “not cloned”.🔧 Suggested direction
- let compressed_delegated_accounts = - compression::resolve_compressed_delegated_accounts( - self, - owned_by_deleg_compressed, - ) - .await; + let compressed_delegated_accounts = + compression::resolve_compressed_delegated_accounts( + self, + owned_by_deleg_compressed, + ) + .await?; accounts_to_clone.extend(compressed_delegated_accounts);Update
magicblock-chainlink/src/chainlink/fetch_cloner/compression.rsto returnChainlinkResult<Vec<AccountCloneRequest>>(or explicitly report skipped pubkeys) so compressed resolution failures are not silent.Verify the helper signature and whether it logs/drops failures internally. If it returns
Vec<_>or uses log-and-skip paths, this concern is confirmed.#!/bin/bash set -euo pipefail echo "=== call site ===" sed -n '684,696p' magicblock-chainlink/src/chainlink/fetch_cloner/mod.rs echo echo "=== helper signature ===" rg -n --type rust -C3 '\bfn\s+resolve_compressed_delegated_accounts\b' magicblock-chainlink/src/chainlink/fetch_cloner/compression.rs echo echo "=== helper error-handling paths ===" rg -n --type rust -C3 'filter_map|warn!|error!|return\s+vec!|collect::<Vec' magicblock-chainlink/src/chainlink/fetch_cloner/compression.rs🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@magicblock-chainlink/src/chainlink/fetch_cloner/mod.rs` around lines 688 - 695, The call to resolve_compressed_delegated_accounts currently returns a plain Vec and can silently drop/skip failures; change its signature in compression.rs to return ChainlinkResult<Vec<AccountCloneRequest>> (or an explicit result that reports skipped pubkeys) and ensure it does not swallow errors internally (remove log-and-skip paths or surface them). Update the caller in fetch_cloner::mod.rs where resolve_compressed_delegated_accounts is invoked to handle the Result (propagate with ? or convert into a clear error) instead of unconditionally extending accounts_to_clone, so compressed fetch/parse failures reach the caller.test-integration/test-committor-service/tests/utils/transactions.rs (1)
78-123: 🧹 Nitpick | 🔵 Trivial
Avoid stacking another polling loop on top of Photon retries.
PhotonIndexer::get_compressed_account already retries internally, so this outer 12x loop can turn a single missing record into a very slow test failure. If you're intentionally polling for a post-write predicate change, add a short comment explaining that state transition; otherwise a single indexed fetch is enough.
get_compressed_account) has built‑in retry logic (defaults to ~10 attempts), so tests should not add separate retry loops around compressed fetches unless there’s a specific need.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test-integration/test-committor-service/tests/utils/transactions.rs` around lines 78 - 123, The macro get_compressed_account! currently wraps PhotonIndexer::get_compressed_account (which already retries) in an additional 12-try polling loop; remove that outer loop and perform a single fetch using $photon_client.get_compressed_account($address, None).await.ok().map(|acc| acc.value.clone()), then evaluate the $predicate once and panic if the account is missing or the predicate fails; keep the two-arity overload that calls the predicate variant unchanged. If you intentionally need to poll for a post-write state transition, instead add a short comment above get_compressed_account! documenting that behavior and reduce/align retries so you’re not duplicating Photon’s internal retry logic.test-integration/test-chainlink/src/ixtest_context.rs (1)
423-432: ⚠️ Potential issue | 🟡 Minor

Use `state_queue_pubkey_index` for `address_queue_pubkey_index`.

The `address_queue_pubkey_index` field is incorrectly set to `address_merkle_tree_pubkey_index`. Line 425-426 computes `state_queue_pubkey_index` from `OUTPUT_QUEUE` but it's never used. Based on the field semantics, `address_queue_pubkey_index` should reference the queue, not the tree.

🐛 Proposed fix
```diff
 let packed_address_tree_info = PackedAddressTreeInfo {
     root_index: rpc_result.addresses[0].root_index,
     address_merkle_tree_pubkey_index,
-    address_queue_pubkey_index: address_merkle_tree_pubkey_index,
+    address_queue_pubkey_index: state_queue_pubkey_index,
 };
```
Verify each finding against the current code and only fix it if needed. In `@test-integration/test-chainlink/src/ixtest_context.rs` around lines 423 - 432, PackedAddressTreeInfo is being constructed with address_queue_pubkey_index mistakenly set to address_merkle_tree_pubkey_index; use the previously computed state_queue_pubkey_index (from remaining_accounts.insert_or_get(OUTPUT_QUEUE)) instead. Update the PackedAddressTreeInfo literal so address_queue_pubkey_index = state_queue_pubkey_index while keeping address_merkle_tree_pubkey_index = address_merkle_tree_pubkey_index; this ensures the queue index references OUTPUT_QUEUE rather than the tree.

magicblock-committor-service/src/intent_executor/task_info_fetcher.rs (1)
214-231: 🧹 Nitpick | 🔵 Trivial

Compressed-specific errors are unreachable in RPC fetch path.

The `fetch_accounts_with_retries` method handles compressed-specific errors (`NoCompressedAccount`, `NoCompressedData`, `MissingStateTrees`, `MissingAddress`, `MissingCompressedData`) that cannot occur when calling `fetch_accounts` (which uses RPC, not Photon). While these branches won't cause runtime issues (they just break with the error), they add noise to the match.

Consider adding a comment explaining these are included for enum exhaustiveness, or restructure to handle only reachable variants.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@magicblock-committor-service/src/intent_executor/task_info_fetcher.rs` around lines 214 - 231, The match in fetch_accounts_with_retries currently lists compressed-specific TaskInfoFetcherError variants (NoCompressedAccount, NoCompressedData, MissingStateTrees, MissingAddress, MissingCompressedData) even though fetch_accounts (the RPC path) can never produce them; update the match in fetch_accounts_with_retries inside TaskInfoFetcher to either (a) add a short comment above those arms stating they are present only for enum exhaustiveness and unreachable for the RPC path, or (b) replace those arms with explicit unreachable handling (e.g., panic/unreachable macro) or restructure the matching to only handle reachable variants (IndexerrError, DeserializeError, LightRpcError, PhotonClientNotFound, LightSdkError) so the code reflects the actual RPC error surface and removes noise while keeping exhaustiveness.
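Option (a) from this prompt can be sketched on a toy error enum. The names below (`FetchError`, `should_retry`) are illustrative stand-ins, not the real `TaskInfoFetcherError` variants:

```rust
// Toy error enum mirroring the shape described above: some variants are
// reachable on the RPC path, others only on the compressed/Photon path.
#[derive(Debug)]
enum FetchError {
    Rpc(String),
    Deserialize(String),
    // Compressed-only variants.
    NoCompressedAccount,
    NoCompressedData,
}

fn should_retry(err: &FetchError) -> bool {
    match err {
        FetchError::Rpc(_) => true,
        FetchError::Deserialize(_) => false,
        // Listed only for enum exhaustiveness; the RPC fetch path
        // never produces compressed-specific errors.
        FetchError::NoCompressedAccount | FetchError::NoCompressedData => false,
    }
}

fn main() {
    assert!(should_retry(&FetchError::Rpc(String::from("timeout"))));
    assert!(!should_retry(&FetchError::NoCompressedData));
    println!("ok");
}
```

Grouping the unreachable arms under one documented branch keeps the match exhaustive without scattering dead-looking cases through it.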
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@compressed-delegation-api/src/instruction.rs`:
- Around line 33-47: The #[repr(u64)] attribute on the
CompressedDelegationProgramInstruction enum is misleading because the enum has
data variants and repr(u64) only applies to fieldless enums; remove the
#[repr(u64)] line so the enum declaration begins with #[derive(Clone, Debug)]
followed by pub enum CompressedDelegationProgramInstruction { ... } to avoid
implying a fixed C-like layout.
In `@compressed-delegation-api/src/state.rs`:
- Around line 40-44: The implementation of DataHasher for
CompressedDelegationRecord incorrectly hard-codes Sha256; change the generic
hash implementation in fn hash<H: Hasher>(&self) -> Result<[u8; 32],
HasherError> to call H::hash(&bytes) (propagate/map its error as needed) instead
of Sha256::hash(&bytes), assign the result to let mut hash = H::hash(&bytes)?;,
then perform the existing mutation (hash[0] = 0) and return the hash; ensure any
required trait import or error mapping for HasherError is preserved.
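The generic-dispatch fix described above can be sketched with toy types. `Hasher` and `Sha256Like` here are stand-ins for the light-hasher traits, and the mixing function is not a real SHA-256:

```rust
// Toy hasher trait standing in for the real light-hasher API.
trait Hasher {
    fn hash(bytes: &[u8]) -> [u8; 32];
}

struct Sha256Like;

impl Hasher for Sha256Like {
    fn hash(bytes: &[u8]) -> [u8; 32] {
        // Toy mixing function, not a real SHA-256.
        let mut out = [0u8; 32];
        for (i, b) in bytes.iter().enumerate() {
            out[i % 32] ^= b.wrapping_mul(31).wrapping_add(i as u8);
        }
        out
    }
}

// The generic impl must call H::hash, not a hard-coded Sha256::hash,
// so callers actually get the hasher they asked for.
fn record_hash<H: Hasher>(bytes: &[u8]) -> [u8; 32] {
    let mut hash = H::hash(bytes);
    hash[0] = 0; // keep the first-byte mutation from the original impl
    hash
}

fn main() {
    let h = record_hash::<Sha256Like>(b"record");
    assert_eq!(h[0], 0);
    println!("ok");
}
```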
In `@compressed-delegation-client/src/builders/commit_finalize.rs`:
- Line 35: The borsh::to_vec(&self.args).unwrap().as_slice() call in
commit_finalize.rs uses unwrap; change it to handle errors explicitly by either
returning a Result from the enclosing function (propagate the serialization
error) or replace unwrap with .expect("failed to serialize CommitFinalize args")
to document the assumption; locate the builder that references self.args
(CommitFinalizeBuilder / commit_finalize construction) and update the
serialization call there to use expect with contextual message or propagate the
error via Result.
In `@compressed-delegation-client/src/builders/delegate.rs`:
- Line 35: The call borsh::to_vec(&self.args).unwrap().as_slice() in delegate.rs
can panic; update the instruction() implementation to avoid an uncontextualized
unwrap—either change instruction() to return Result<Instruction, E> and
propagate the serialization error from borsh::to_vec(&self.args), or replace
unwrap() with .expect("DelegateArgs serialization invariant violated in
instruction()") to document the invariant; reference the exact expression
borsh::to_vec(&self.args).unwrap().as_slice() when making the change.
In `@compressed-delegation-client/src/builders/init_delegation_record.rs`:
- Around line 32-37: The code uses borsh::to_vec(&self.args).unwrap() when
building the data payload (see
DelegationProgramDiscriminator::InitDelegationRecord and self.args); replace the
unwrap() with an explicit expect(...) that documents the invariant (e.g.,
"serialization of InitDelegationRecord args must not fail") so failures produce
a clear message; apply the same expect-based pattern to other builders in this
crate for consistency.
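Both options suggested across these builder fixes (propagate the error, or keep the panic but name the invariant) can be sketched as follows. `to_vec` is a toy stand-in for `borsh::to_vec`, and the discriminator byte and `Args` type are illustrative, not the real crate layout:

```rust
type Args = u32;

// Toy stand-in for borsh::to_vec.
fn to_vec(args: &Args) -> Result<Vec<u8>, String> {
    Ok(args.to_le_bytes().to_vec())
}

// Option A: return a Result so callers see serialization failures.
fn instruction_data(args: &Args) -> Result<Vec<u8>, String> {
    let mut data = vec![1u8]; // discriminator byte
    data.extend(to_vec(args)?);
    Ok(data)
}

// Option B: keep the panicking call but document the invariant,
// so a failure produces a clear message instead of a bare unwrap.
fn instruction_data_expect(args: &Args) -> Vec<u8> {
    let mut data = vec![1u8];
    data.extend(to_vec(args).expect("serialization of args must not fail"));
    data
}

fn main() {
    assert_eq!(instruction_data(&7).unwrap(), vec![1, 7, 0, 0, 0]);
    assert_eq!(instruction_data_expect(&7), vec![1, 7, 0, 0, 0]);
    println!("ok");
}
```

For builder methods that already return the instruction by value, Option B is the smaller change; Option A is preferable if the builders' signatures can still be evolved.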
In `@compressed-delegation-client/src/cpi/undelegate.rs`:
- Around line 17-23: The UndelegateCpi struct currently lacks a dedicated
program AccountInfo so the CPI can't pass the callee program to invoke_signed;
add a new field (e.g., program: AccountInfo<'a>) to UndelegateCpi and update the
CPI invocation logic (the code that builds the infos vector and calls
invoke_signed) to append this program AccountInfo to infos instead of relying on
remaining_accounts to carry the program; also update any other helper/path in
the same file (the other block around lines 31-50 that mirrors this behavior) to
use the new program field and ensure ix.program_id is represented by the
appended AccountInfo when calling invoke_signed.
In `@magicblock-chainlink/src/chainlink/fetch_cloner/compression.rs`:
- Around line 155-156: Create a tracked issue for implementing
commit_frequency_ms support for compressed delegation records and update both
TODO comments to reference that issue; specifically, open an issue describing
the missing commit_frequency_ms behavior and expected semantics, then replace
the two TODOs in resolve_compressed_accounts_to_clone and
should_refresh_undelegating_in_bank_compressed_account with a single clear
TODO/FIXME that includes the issue number, a short description, and a "WIP" note
so future readers can find the issue and avoid hardcoding commit_frequency_ms =
0; optionally add a comment suggesting where to wire the field once implemented
(e.g., from the compressed record metadata or config).
In `@magicblock-chainlink/src/remote_account_provider/mod.rs`:
- Line 1007: The info-level log line info!(rpc_accounts = ?rpc_accounts,
photon_accounts = ?photon_accounts, "Fetched accounts from RPC and Photon"); is
too verbose for production—change it to a lower verbosity (e.g., debug! or
trace!) so full account details are only logged when debugging; locate that
macro invocation in mod.rs and replace info! with debug! (or trace!) and keep
the structured fields (rpc_accounts and photon_accounts) unchanged.
- Around line 450-455: The error branch conflates "no endpoint" with the
existing MultiplePhotonEndpointsProvided variant; change the ok_or_else call to
return a proper "no endpoint" error (e.g.,
RemoteAccountProviderError::NoPhotonEndpointProvided) or add that variant to the
RemoteAccountProviderError enum if it doesn't exist, then use it in the photon()
handling before mapping to PhotonClientImpl::new_from_endpoint so
absent-endpoint errors are reported correctly instead of
MultiplePhotonEndpointsProvided.
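The distinction this prompt asks for can be sketched on a toy enum. The variant names follow the review suggestion; the real enum is `RemoteAccountProviderError`:

```rust
#[derive(Debug, PartialEq)]
enum ProviderError {
    NoPhotonEndpointProvided,
    MultiplePhotonEndpointsProvided,
}

// An empty endpoint list maps to the dedicated "no endpoint" variant
// instead of reusing the "multiple endpoints" error.
fn pick_photon_endpoint(endpoints: &[&str]) -> Result<String, ProviderError> {
    match endpoints {
        [] => Err(ProviderError::NoPhotonEndpointProvided),
        [one] => Ok((*one).to_string()),
        _ => Err(ProviderError::MultiplePhotonEndpointsProvided),
    }
}

fn main() {
    assert_eq!(
        pick_photon_endpoint(&[]),
        Err(ProviderError::NoPhotonEndpointProvided)
    );
    assert_eq!(
        pick_photon_endpoint(&["http://localhost:8784"]).unwrap(),
        "http://localhost:8784"
    );
    println!("ok");
}
```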
In `@magicblock-committor-service/src/tasks/mod.rs`:
- Around line 173-183: The serialization test
test_args_task_instruction_serialization does not cover the new compressed
variants added to ArgsTask; update that test to include instances of
ArgsTask::CompressedCommit(CompressedCommitTask),
ArgsTask::CompressedFinalize(CompressedFinalizeTask), and
ArgsTask::CompressedUndelegate(CompressedUndelegateTask) (constructed with the
same pattern used for the legacy Commit/Finalize/Undelegate builders) and assert
their instruction serialization/roundtrip equivalence just like the existing
legacy cases so the new builders receive regression coverage.
In `@magicblock-committor-service/src/tasks/task_builder.rs`:
- Around line 175-196: The async closure currently borrows `account` from the
iterator which can lead to lifetime issues; change the mapping to capture owned
data and use an `async move` closure: extract/clone the key before the async
block (e.g., let pubkey = account.pubkey.clone();), then use `.map(|account| {
let pubkey = account.pubkey.clone(); async move { Ok((pubkey,
info_fetcher.get_compressed_data(&pubkey, None).await?)) } })` (also clone or
move `info_fetcher` into the closure if it must be owned) so the futures
produced by `FuturesUnordered` do not borrow from the iterator.
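The ownership rule behind the `async move` suggestion can be shown with threads, since the same borrow problem applies: clone the key before the `move` block so each unit of work owns its data instead of borrowing from the iterator. `fetch_all` is a toy stand-in for the `FuturesUnordered` mapping:

```rust
use std::thread;

// Clone before the `move` block so each spawned unit of work owns its
// data instead of borrowing from the iterator — the same ownership rule
// the `async move` fix applies to the futures in FuturesUnordered.
fn fetch_all(pubkeys: &[String]) -> Vec<String> {
    let handles: Vec<_> = pubkeys
        .iter()
        .map(|key| {
            let key = key.clone(); // capture owned data, not a borrow
            thread::spawn(move || format!("fetched:{key}"))
        })
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    let keys = vec![String::from("acc1"), String::from("acc2")];
    assert_eq!(fetch_all(&keys), vec!["fetched:acc1", "fetched:acc2"]);
    println!("ok");
}
```

Without the inner clone, the closure would hand the spawned work a reference tied to the iterator's lifetime, which is exactly what the compiler rejects in the async version.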
In `@programs/magicblock/src/schedule_transactions/process_schedule_commit.rs`:
- Around line 33-45: The wrapper process_schedule_commit simply forwards to
schedule_commit with identical parameters; remove this indirection by deleting
the thin wrapper and either (a) make schedule_commit pub(crate) with the
signature of process_schedule_commit so callers use schedule_commit directly, or
(b) rename schedule_commit to process_schedule_commit and remove the original
wrapper; update any call sites to reference the single remaining function
(process_schedule_commit or schedule_commit) and keep the same parameter types
and return type.
In `@test-integration/programs/flexi-counter/src/processor.rs`:
- Around line 613-645: The PDA's lamports are being drained before calling the
CPI so DelegateArgs.lamports is set to 0; capture the PDA's original lamports
into a variable (e.g., let original_lamports = counter_pda_info.lamports();)
before realloc/drain and pass that value into
compressed_delegation_client::DelegateArgs.lamports instead of 0; update the
call site in the DelegateCpi invocation (where DelegateArgs is constructed) to
use this variable so process_external_undelegate_compressed can restore the
correct balance.
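The capture-before-drain pattern described above, with `Pda` and `DelegateArgs` as toy types standing in for the real accounts:

```rust
struct Pda {
    lamports: u64,
}

struct DelegateArgs {
    lamports: u64,
}

fn delegate(pda: &mut Pda) -> DelegateArgs {
    let original_lamports = pda.lamports; // capture first
    pda.lamports = 0; // drain into the payer before the CPI
    DelegateArgs {
        lamports: original_lamports, // not 0
    }
}

fn main() {
    let mut pda = Pda { lamports: 1_000_000 };
    let args = delegate(&mut pda);
    assert_eq!(args.lamports, 1_000_000);
    assert_eq!(pda.lamports, 0);
    println!("ok");
}
```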
In `@test-integration/test-chainlink/tests/ix_07_redeleg_us_same_slot.rs`:
- Around line 143-158: The test reads stale cache after calling
ctx.undelegate_compressed_counter(&counter_auth, true) then sleeping; add an
explicit cache refresh by calling the same ensure_accounts() (or the test
harness method that populates the cloner/chainlink cache) immediately after the
sleep and before reading ctx.cloner.get_account and asserting subscription
state. Update the sequence around ctx.undelegate_compressed_counter,
sleep_ms(1_500).await, and the subsequent calls to ctx.cloner.get_account(...)
and assert_not_subscribed!(ctx.chainlink, ...) so the assertions operate on
refreshed state.
In `@test-integration/test-committor-service/tests/common.rs`:
- Around line 72-73: TestFixture currently creates create_test_photon_indexer()
but the instance is dropped and create_task_info_fetcher()/get_compressed_data()
return CompressedData::default(), so the preparator test never loads real
delegation data; fix by wiring the photon indexer into the fixture and task info
fetcher so get_compressed_data() returns real compressed bytes: store the result
of create_test_photon_indexer() as a field on TestFixture (or pass it into
create_task_info_fetcher()), update create_task_info_fetcher() to use that
stored photon indexer when building compressed data, and make
get_compressed_data() return the actual compressed delegation record bytes
(compressed_delegation_record_bytes) produced by the photon indexer instead of
CompressedData::default(); ensure any other places in the fixture (also around
the block where TestFixture is constructed later) follow the same pattern so the
indexer is not dropped.
In `@test-integration/test-committor-service/tests/test_intent_executor.rs`:
- Around line 90-93: Add a test case that exercises the compressed executor path
by creating an executor invocation where compressed = true and ensuring the task
under test is marked compressed (so task.is_compressed() becomes relevant);
locate the places building the executor/commit fetch calls (the
CacheTaskInfoFetcher usage and the fetch_next_commit_ids invocations that
currently hardcode false) and add one variant that passes true for the
compressed flag (e.g., a compressed unfinalized-account or CPI-limit recovery
scenario), leaving the existing false cases intact so the new branch and
recovery plumbing are actually executed during the suite.
---
Outside diff comments:
In `@magicblock-committor-service/src/intent_executor/mod.rs`:
- Around line 425-449: The commit/finalize partitioning doesn't treat
TaskType::CompressedCommitAndFinalize correctly, so compressed tasks end up only
in finalize_stage_tasks; update the logic that builds commit_stage_tasks and
finalize_stage_tasks (the code that feeds TransactionStrategy/commit_strategy
and finalize_strategy) to either (A) treat TaskType::CompressedCommitAndFinalize
as a two-stage pair by placing its commit portion into commit_stage_tasks and
its finalize portion into finalize_stage_tasks (preserving strategy.compressed),
or (B) detect compressed tasks here and fail fast (return an error) before
constructing TransactionStrategy; ensure TransactionStrategy instances
(commit_strategy and finalize_strategy) reflect the corrected partitioning for
compressed tasks so retries do not run an empty commit stage and a
commit-containing finalize stage.
In `@magicblock-committor-service/src/intent_executor/single_stage_executor.rs`:
- Around line 216-223: The finalize retry currently calls
prepare_and_execute_strategy(..., None) which drops the original signature
carried by UnfinalizedAccountError; update the call to forward the original
failed signature into the new preparator parameter so compressed retries can
rebuild finalize work with commit context. Specifically, when invoking
prepare_and_execute_strategy for the single-stage finalize path, pass the
preparator that includes the original failed_signature (at minimum forward
*failed_signature when task.is_compressed()), so TransactionStrategy/prepare
logic can reconstruct compressed finalize work rather than using None.
In `@magicblock-committor-service/src/intent_executor/two_stage_executor.rs`:
- Around line 225-234: The retry call to inner.prepare_and_execute_strategy is
dropping the recovered commit signature by passing None as the final argument;
extract the signature carried by the UnfinalizedAccountError (the recovered
commit signature) and pass it into prepare_and_execute_strategy instead of None
so the TransactionStrategy retry has the slot/signature metadata it needs (i.e.,
replace the trailing None with Some(recovered_commit_signature) when calling
prepare_and_execute_strategy).
In `@magicblock-committor-service/src/tasks/task_strategist.rs`:
- Around line 214-227: The persistor is currently running before the strategy is
actually built so InconsistentTaskCompression errors can occur after a strategy
was persisted; change the flow to validate/build the strategy first (call
TransactionStrategy::try_new or the build_strategy validation that can return
InconsistentTaskCompression) and only if that succeeds run the PersistorVisitor
(the PersistorVisitor struct with PersistorContext::PersistStrategy); in short,
perform TransactionStrategy::try_new/build validation before invoking the
persistor and move the persistor invocation to after the successful strategy
creation.
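The reordered flow from this comment, build and validate first, persist only on success, can be sketched with illustrative stand-ins for `TransactionStrategy` and the persistor:

```rust
#[derive(Debug, PartialEq)]
enum StrategyError {
    InconsistentTaskCompression,
}

// Stand-in for TransactionStrategy::try_new: all tasks must agree on
// compression or the build fails.
fn try_build(task_compression: &[bool]) -> Result<&'static str, StrategyError> {
    if task_compression.windows(2).all(|w| w[0] == w[1]) {
        Ok("strategy")
    } else {
        Err(StrategyError::InconsistentTaskCompression)
    }
}

// Stand-in for the PersistorVisitor side effect.
fn persist(strategy: &str) -> String {
    format!("persisted:{strategy}")
}

fn build_and_persist(tasks: &[bool]) -> Result<String, StrategyError> {
    let strategy = try_build(tasks)?; // validate first
    Ok(persist(strategy)) // persist only after a successful build
}

fn main() {
    assert_eq!(build_and_persist(&[true, true]).unwrap(), "persisted:strategy");
    assert_eq!(
        build_and_persist(&[true, false]),
        Err(StrategyError::InconsistentTaskCompression)
    );
    println!("ok");
}
```

With this ordering, a failed build can never leave a half-persisted strategy behind.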
In `@test-integration/programs/flexi-counter/src/processor.rs`:
- Around line 439-486: The CPI invocations build Instructions with AccountMeta
referencing the callee program (e.g., Instruction::new_with_bytes targeting
magic_program_info or system_program) but then call invoke/invoke_signed without
including that program's AccountInfo in the account slice; add the corresponding
program AccountInfo (magic_program_info for Magic CPIs, system_program_info for
system CPIs) to the slice passed to invoke/invoke_signed so the AccountInfos
match the Instruction's AccountMeta. Update the calls in process_schedule_task
(where ix targets magic_program_info and invoke_signed currently passes
&[payer_info, counter_pda_info]), process_cancel_task (same),
process_schedule_commit_compressed (add magic_program), and
process_external_undelegate_compressed (add system_program) to include the
program AccountInfo in the same order/position expected by the Instruction.
---
Duplicate comments:
In `@Cargo.toml`:
- Around line 90-94: The five git dependencies (light-client,
light-compressed-account, light-hasher, light-sdk, light-sdk-types) are pinned
to a commit hash (rev = "849e6aa"); replace each rev entry with a branch
reference (e.g., branch = "main" or the project’s canonical branch name) so
Cargo will follow branch updates, then run cargo update to refresh lockfile;
ensure you change the entries for the exact dependency keys named above.
In `@magicblock-chainlink/src/chainlink/fetch_cloner/mod.rs`:
- Around line 688-695: The call to resolve_compressed_delegated_accounts
currently returns a plain Vec and can silently drop/skip failures; change its
signature in compression.rs to return ChainlinkResult<Vec<AccountCloneRequest>>
(or an explicit result that reports skipped pubkeys) and ensure it does not
swallow errors internally (remove log-and-skip paths or surface them). Update
the caller in fetch_cloner::mod.rs where resolve_compressed_delegated_accounts
is invoked to handle the Result (propagate with ? or convert into a clear error)
instead of unconditionally extending accounts_to_clone, so compressed
fetch/parse failures reach the caller.
In `@magicblock-committor-service/src/intent_executor/mod.rs`:
- Around line 341-347: The code incorrectly propagates a single
strategy.compressed flag through recovery (see
task_info_fetcher.fetch_next_commit_ids uses strategy.compressed and rebuilt
TransactionStrategy copies it), causing mixed compressed/regular tasks to be
fetched/prepared against the wrong source; fix by deriving compression on a
per-task or per-pubkey basis: group committed_pubkeys (or tasks) by their
compression property, call fetch_next_commit_ids for each group with the correct
compressed flag, and when rebuilding TransactionStrategy ensure the compressed
field is set per task/group rather than reusing the original
strategy.compressed; update the helpers that rebuild strategies (the ones around
the existing calls at the noted ranges) to assign compression per task and to
merge results from each group.
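The group-then-fetch approach this prompt describes can be sketched as follows; `fetch_ids` is a toy stand-in for `fetch_next_commit_ids`, and the account tuples are illustrative:

```rust
use std::collections::BTreeMap;

// Toy stand-in for fetch_next_commit_ids: tags each key with the
// compression flag it was fetched under.
fn fetch_ids(keys: &[&str], compressed: bool) -> Vec<String> {
    keys.iter().map(|k| format!("{k}:{compressed}")).collect()
}

// Group accounts by their own compression property and fetch each group
// with the matching flag, instead of reusing one strategy-wide flag.
fn fetch_per_group(accounts: &[(&str, bool)]) -> Vec<String> {
    let mut groups: BTreeMap<bool, Vec<&str>> = BTreeMap::new();
    for &(key, compressed) in accounts {
        groups.entry(compressed).or_default().push(key);
    }
    let mut results = Vec::new();
    for (compressed, keys) in &groups {
        results.extend(fetch_ids(keys, *compressed));
    }
    results
}

fn main() {
    let accounts = [("a", true), ("b", false), ("c", true)];
    let mut got = fetch_per_group(&accounts);
    got.sort();
    assert_eq!(got, vec!["a:true", "b:false", "c:true"]);
    println!("ok");
}
```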
In `@magicblock-committor-service/src/intent_executor/task_info_fetcher.rs`:
- Around line 214-231: The match in fetch_accounts_with_retries currently lists
compressed-specific TaskInfoFetcherError variants (NoCompressedAccount,
NoCompressedData, MissingStateTrees, MissingAddress, MissingCompressedData) even
though fetch_accounts (the RPC path) can never produce them; update the match in
fetch_accounts_with_retries inside TaskInfoFetcher to either (a) add a short
comment above those arms stating they are present only for enum exhaustiveness
and unreachable for the RPC path, or (b) replace those arms with explicit
unreachable handling (e.g., panic/unreachable macro) or restructure the matching
to only handle reachable variants (IndexerrError, DeserializeError,
LightRpcError, PhotonClientNotFound, LightSdkError) so the code reflects the
actual RPC error surface and removes noise while keeping exhaustiveness.
In `@test-integration/test-chainlink/src/ixtest_context.rs`:
- Around line 423-432: PackedAddressTreeInfo is being constructed with
address_queue_pubkey_index mistakenly set to address_merkle_tree_pubkey_index;
use the previously computed state_queue_pubkey_index (from
remaining_accounts.insert_or_get(OUTPUT_QUEUE)) instead. Update the
PackedAddressTreeInfo literal so address_queue_pubkey_index =
state_queue_pubkey_index while keeping address_merkle_tree_pubkey_index =
address_merkle_tree_pubkey_index; this ensures the queue index references
OUTPUT_QUEUE rather than the tree.
In `@test-integration/test-committor-service/tests/utils/transactions.rs`:
- Around line 78-123: The macro get_compressed_account! currently wraps
PhotonIndexer::get_compressed_account (which already retries) in an additional
12-try polling loop; remove that outer loop and perform a single fetch using
$photon_client.get_compressed_account($address, None).await.ok().map(|acc|
acc.value.clone()), then evaluate the $predicate once and panic if the account
is missing or the predicate fails; keep the two-arity overload that calls the
predicate variant unchanged. If you intentionally need to poll for a post-write
state transition, instead add a short comment above get_compressed_account!
documenting that behavior and reduce/align retries so you’re not duplicating
Photon’s internal retry logic.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: f57025a4-d137-4148-94e0-64569119442e
⛔ Files ignored due to path filters (3)
- `Cargo.lock` is excluded by `!**/*.lock`
- `test-integration/Cargo.lock` is excluded by `!**/*.lock`
- `test-integration/programs/compressed_delegation/compressed_delegation.so` is excluded by `!**/*.so`
📒 Files selected for processing (61)
- `Cargo.toml`
- `compressed-delegation-api/Cargo.toml`
- `compressed-delegation-api/src/instruction.rs`
- `compressed-delegation-api/src/lib.rs`
- `compressed-delegation-api/src/state.rs`
- `compressed-delegation-client/Cargo.toml`
- `compressed-delegation-client/src/builders/commit_finalize.rs`
- `compressed-delegation-client/src/builders/delegate.rs`
- `compressed-delegation-client/src/builders/init_delegation_record.rs`
- `compressed-delegation-client/src/builders/mod.rs`
- `compressed-delegation-client/src/builders/undelegate.rs`
- `compressed-delegation-client/src/cpi/commit_finalize.rs`
- `compressed-delegation-client/src/cpi/delegate.rs`
- `compressed-delegation-client/src/cpi/helpers.rs`
- `compressed-delegation-client/src/cpi/init_delegation_record.rs`
- `compressed-delegation-client/src/cpi/mod.rs`
- `compressed-delegation-client/src/cpi/undelegate.rs`
- `compressed-delegation-client/src/lib.rs`
- `compressed-delegation-client/src/mod.rs`
- `compressed-delegation-client/src/programs.rs`
- `compressed-delegation-client/src/shared.rs`
- `magicblock-api/src/magic_validator.rs`
- `magicblock-chainlink/src/chainlink/fetch_cloner/compression.rs`
- `magicblock-chainlink/src/chainlink/fetch_cloner/mod.rs`
- `magicblock-chainlink/src/chainlink/fetch_cloner/pipeline.rs`
- `magicblock-chainlink/src/chainlink/fetch_cloner/tests.rs`
- `magicblock-chainlink/src/remote_account_provider/mod.rs`
- `magicblock-chainlink/src/remote_account_provider/remote_account.rs`
- `magicblock-chainlink/tests/utils/test_context.rs`
- `magicblock-committor-service/src/committor_processor.rs`
- `magicblock-committor-service/src/intent_execution_manager.rs`
- `magicblock-committor-service/src/intent_executor/mod.rs`
- `magicblock-committor-service/src/intent_executor/single_stage_executor.rs`
- `magicblock-committor-service/src/intent_executor/task_info_fetcher.rs`
- `magicblock-committor-service/src/intent_executor/two_stage_executor.rs`
- `magicblock-committor-service/src/service.rs`
- `magicblock-committor-service/src/tasks/args_task.rs`
- `magicblock-committor-service/src/tasks/mod.rs`
- `magicblock-committor-service/src/tasks/task_builder.rs`
- `magicblock-committor-service/src/tasks/task_strategist.rs`
- `magicblock-committor-service/src/transaction_preparator/delivery_preparator.rs`
- `magicblock-committor-service/src/transaction_preparator/mod.rs`
- `magicblock-config/src/config/compression.rs`
- `programs/magicblock/src/magicblock_processor.rs`
- `programs/magicblock/src/schedule_transactions/process_schedule_commit.rs`
- `test-integration/Cargo.toml`
- `test-integration/programs/flexi-counter/src/instruction.rs`
- `test-integration/programs/flexi-counter/src/processor.rs`
- `test-integration/test-chainlink/src/ixtest_context.rs`
- `test-integration/test-chainlink/src/test_context.rs`
- `test-integration/test-chainlink/tests/ix_01_ensure-accounts.rs`
- `test-integration/test-chainlink/tests/ix_03_deleg_after_sub.rs`
- `test-integration/test-chainlink/tests/ix_06_redeleg_us_separate_slots.rs`
- `test-integration/test-chainlink/tests/ix_07_redeleg_us_same_slot.rs`
- `test-integration/test-committor-service/tests/common.rs`
- `test-integration/test-committor-service/tests/test_delivery_preparator.rs`
- `test-integration/test-committor-service/tests/test_intent_executor.rs`
- `test-integration/test-committor-service/tests/test_ix_commit_local.rs`
- `test-integration/test-committor-service/tests/test_transaction_preparator.rs`
- `test-integration/test-committor-service/tests/utils/instructions.rs`
- `test-integration/test-committor-service/tests/utils/transactions.rs`
```rust
ctx.undelegate_compressed_counter(&counter_auth, true).await;

// Wait for pubsub update to trigger subscription handler
sleep_ms(1_500).await;

// Account should still be cloned as delegated to us
let account = ctx.cloner.get_account(&counter_pda).unwrap();
assert_cloned_as_delegated!(
    ctx.cloner,
    &[counter_pda],
    account.remote_slot(),
    program_flexi_counter::id()
);

// Accounts delegated to us should not be tracked via subscription
assert_not_subscribed!(ctx.chainlink, &[&counter_pda]);
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
printf "== undelegate_compressed_counter implementation ==\n"
rg -n --type=rust -C8 '\bundelegate_compressed_counter\s*\('
printf "\n== compressed same-slot test ==\n"
sed -n '90,170p' test-integration/test-chainlink/tests/ix_07_redeleg_us_same_slot.rs
```

Repository: magicblock-labs/magicblock-validator

Length of output: 7387

🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
# Get the full implementation of undelegate_compressed_counter
sed -n '553,620p' test-integration/test-chainlink/src/ixtest_context.rs
```

Repository: magicblock-labs/magicblock-validator

Length of output: 2821

🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
# Continue from line 620 to see the rest
sed -n '620,700p' test-integration/test-chainlink/src/ixtest_context.rs
```

Repository: magicblock-labs/magicblock-validator

Length of output: 3096

🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
# Find and get delegate_compressed_counter implementation
rg -n 'pub async fn delegate_compressed_counter' test-integration/test-chainlink/src/ixtest_context.rs -A 60
```

Repository: magicblock-labs/magicblock-validator

Length of output: 2532

🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
# Get the rest of delegate_compressed_counter (lines 540-555)
sed -n '540,555p' test-integration/test-chainlink/src/ixtest_context.rs
```

Repository: magicblock-labs/magicblock-validator

Length of output: 465

🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
# Check what undelegation_requested does
rg -n 'fn undelegation_requested' magicblock-chainlink/src -A 20
```

Repository: magicblock-labs/magicblock-validator

Length of output: 1734

🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
# Check what get_account does
rg -n 'fn get_account' magicblock-chainlink/src -A 15
```

Repository: magicblock-labs/magicblock-validator

Length of output: 13410

🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
# Look for ensure_accounts implementation
rg -n 'async fn ensure_accounts' magicblock-chainlink/src -A 25 | head -60
```

Repository: magicblock-labs/magicblock-validator

Length of output: 2253

🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
# Check how the test context works - look at initialization and cache management
sed -n '1,100p' test-integration/test-chainlink/src/ixtest_context.rs
# Also check if there's a delegate pattern that works differently in other tests
rg -n 'delegate_compressed_counter' test-integration/test-chainlink/tests -B 5 -A 10 | head -80
```

Repository: magicblock-labs/magicblock-validator

Length of output: 11303
Add explicit state refresh in step 2 after redelegation.
Step 1 calls ensure_accounts() to populate the cache, but step 2 only sleeps and then reads from that cached state without refreshing. Without an explicit refresh after undelegate_compressed_counter(true) completes, the test reads stale data from step 1 and cannot verify that the account was actually redelegated. This differs from the pattern in ix_06, which calls ensure_accounts() after delegation before asserting.
🛠️ Proposed fix
```diff
 ctx.undelegate_compressed_counter(&counter_auth, true).await;
-// Wait for pubsub update to trigger subscription handler
-sleep_ms(1_500).await;
+ctx.chainlink
+    .ensure_accounts(
+        &pubkeys,
+        None,
+        AccountFetchOrigin::GetMultipleAccounts,
+        None,
+    )
+    .await
+    .unwrap();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test-integration/test-chainlink/tests/ix_07_redeleg_us_same_slot.rs` around
lines 143 - 158, The test reads stale cache after calling
ctx.undelegate_compressed_counter(&counter_auth, true) then sleeping; add an
explicit cache refresh by calling the same ensure_accounts() (or the test
harness method that populates the cloner/chainlink cache) immediately after the
sleep and before reading ctx.cloner.get_account and asserting subscription
state. Update the sequence around ctx.undelegate_compressed_counter,
sleep_ms(1_500).await, and the subsequent calls to ctx.cloner.get_account(...)
and assert_not_subscribed!(ctx.chainlink, ...) so the assertions operate on
refreshed state.
```rust
let _photon_indexer = create_test_photon_indexer();
```
This fixture never returns real compressed data.
TestFixture now creates _photon_indexer, but create_task_info_fetcher() drops it on the floor and get_compressed_data() always returns CompressedData::default(). The new compressed preparator test therefore never loads the real delegation record and will keep empty/default compressed_delegation_record_bytes.
Also applies to: 131-189
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test-integration/test-committor-service/tests/common.rs` around lines 72 -
73, TestFixture currently creates create_test_photon_indexer() but the instance
is dropped and create_task_info_fetcher()/get_compressed_data() return
CompressedData::default(), so the preparator test never loads real delegation
data; fix by wiring the photon indexer into the fixture and task info fetcher so
get_compressed_data() returns real compressed bytes: store the result of
create_test_photon_indexer() as a field on TestFixture (or pass it into
create_task_info_fetcher()), update create_task_info_fetcher() to use that
stored photon indexer when building compressed data, and make
get_compressed_data() return the actual compressed delegation record bytes
(compressed_delegation_record_bytes) produced by the photon indexer instead of
CompressedData::default(); ensure any other places in the fixture (also around
the block where TestFixture is constructed later) follow the same pattern so the
indexer is not dropped.
```rust
let task_info_fetcher = Arc::new(CacheTaskInfoFetcher::new(
    fixture.rpc_client.clone(),
    fixture._photon_indexer.clone(),
));
```
🧹 Nitpick | 🔵 Trivial
Add one compressed = true executor case here.
Every updated fetch_next_commit_ids call still hardcodes false, so this suite doesn't exercise the new compressed branch or the task.is_compressed() recovery plumbing added in this PR. A single compressed unfinalized-account or CPI-limit recovery case would close that gap.
Also applies to: 142-146, 429-430, 629-630, 835-837
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test-integration/test-committor-service/tests/test_intent_executor.rs` around
lines 90 - 93, Add a test case that exercises the compressed executor path by
creating an executor invocation where compressed = true and ensuring the task
under test is marked compressed (so task.is_compressed() becomes relevant);
locate the places building the executor/commit fetch calls (the
CacheTaskInfoFetcher usage and the fetch_next_commit_ids invocations that
currently hardcode false) and add one variant that passes true for the
compressed flag (e.g., a compressed unfinalized-account or CPI-limit recovery
scenario), leaving the existing false cases intact so the new branch and
recovery plumbing are actually executed during the suite.
Summary
Compatibility
`compression.photon-url` and `compression.api-key`

Closes #431
Summary by CodeRabbit
New Features
Chores