Conversation
📝 Walkthrough

Adds read-only "Ref" APIs for batched Merkle trees, queues, and bloom filters; genericizes ZeroCopy cyclic vectors for immutable backings; introduces read-only non-inclusion checks; adds many test helpers, tests, and builders; and updates consumers to use immutable slices and the new Ref types.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 7
🤖 Fix all issues with AI agents
In `@program-libs/batched-merkle-tree/src/lib.rs`:
- Around line 181-182: Reorder the pub mod declarations in lib.rs to satisfy the
project's alphabetical module order: move pub mod merkle_tree_ref and pub mod
queue_ref so they appear in alphabetical sequence relative to the other module
declarations (e.g., ensure queue, merkle_tree_ref, queue_ref are ordered
alphabetically or adjust surrounding modules so merkle_tree_ref and queue_ref
are placed correctly); update the module block containing merkle_tree_ref and
queue_ref to match the lint's expected ordering.
In `@program-libs/batched-merkle-tree/src/merkle_tree_ref.rs`:
- Around line 145-159: The docstring for check_input_queue_non_inclusion is
misleading: the function iterates all batches unconditionally and calls
Batch::check_non_inclusion_ref for each bloom filter, it does not skip zeroed
filters; update the comment on check_input_queue_non_inclusion to state it
checks every bloom filter (including zeroed ones) or remove the "which are not
zeroed" phrase and optionally reference that zeroed filters trivially pass
check_non_inclusion_ref rather than implying a conditional skip via
bloom_filter_is_zeroed().
In `@program-libs/batched-merkle-tree/src/queue_ref.rs`:
- Around line 40-62: Change BatchedQueueRef::from_account_info to accept
program_id: &[u8; 32] (matching BatchedMerkleTreeRef::from_account_info) instead
of &Pubkey, update its use of check_account_info to pass program_id directly
(remove program_id.to_bytes()), and update callers such as
BatchedQueueRef::output_from_account_info to pass
&ACCOUNT_COMPRESSION_PROGRAM_ID (or the &[u8;32] constant) instead of creating a
Pubkey; refer to the function name BatchedQueueRef::from_account_info, the
caller output_from_account_info, and the
check_account_info::<BatchedQueueAccount, A> call when making these edits.
In `@program-libs/batched-merkle-tree/tests/merkle_tree_ref.rs`:
- Around line 10-34: The test test_merkle_tree_ref_matches_mutable uses a
hard-coded account_data size (3376) which is fragile; replace that with a call
to the existing MerkleTreeAccountBuilder to compute the correct size (e.g., use
MerkleTreeAccountBuilder::build() or a builder method that returns the required
bytes) and pass that buffer into BatchedMerkleTreeAccount::init, or if you must
keep a literal add a comment explaining how 3376 was derived; update the
account_data initialization to use MerkleTreeAccountBuilder so future layout
changes won’t break BatchedMerkleTreeAccount::init.
In `@program-libs/batched-merkle-tree/tests/queue_ref.rs`:
- Around line 284-297: The test test_queue_ref_pubkey correctly asserts that
BatchedQueueRef::output_from_bytes returns Pubkey::default(), but add a
complementary test that constructs an AccountInfo (or uses
QueueAccountBuilder::output_queue().build() account info) and calls
BatchedQueueRef::output_from_account_info (or from_account_info) to ensure the
pubkey is extracted from the account info; locate and update tests around
QueueAccountBuilder::output_queue, BatchedQueueRef::output_from_bytes and the
from_account_info method to create an account with a unique pubkey and assert
*queue_ref.pubkey() == that unique pubkey.
In `@program-libs/batched-merkle-tree/tests/test_helpers/account_builders.rs`:
- Around line 209-219: The test currently hardcodes byte offsets in
build_with_wrong_queue_type (and similarly build_with_wrong_tree_type) which is
brittle; instead compute the offsets programmatically: add/import
memoffset::offset_of!, compute the queue_type offset as discriminator_size (8) +
offset_of!(BatchedQueueMetadata, metadata) + offset_of!(QueueMetadata,
queue_type), and write wrong_type at that computed index; for tree_type compute
discriminator_size + offset_of!(YourAccountStruct, tree_type) (or the correct
nested path) similarly. Use the exact struct names seen in the code
(build_with_wrong_queue_type, build_with_wrong_tree_type, BatchedQueueMetadata,
QueueMetadata, tree_type) so the test locates fields reliably rather than using
magic numbers.
In `@program-libs/bloom-filter/src/lib.rs`:
- Around line 132-169: Add rustdoc comments to the public BloomFilterRef type
and its public methods (BloomFilterRef, BloomFilterRef::new, and
BloomFilterRef::contains) describing their purpose, parameters, return values,
and any invariants (e.g., that store.len()*8 == capacity). Follow the existing
project's doc style used by BloomFilter::calculate_bloom_filter_size for
phrasing and include examples or notes on read-only semantics and expected input
(32-byte value for contains) where appropriate.
```rust
    /// Check non-inclusion in all bloom filters which are not zeroed.
    pub fn check_input_queue_non_inclusion(
        &self,
        value: &[u8; 32],
    ) -> Result<(), BatchedMerkleTreeError> {
        for i in 0..self.queue_batches.num_batches as usize {
            Batch::check_non_inclusion_ref(
                self.queue_batches.batches[i].num_iters as usize,
                self.queue_batches.batches[i].bloom_filter_capacity,
                value,
                self.bloom_filter_stores[i],
            )?;
        }
        Ok(())
    }
```
Doc comment is slightly misleading — the code checks all bloom filters, not just "not zeroed" ones.
The comment says "Check non-inclusion in all bloom filters which are not zeroed," but the implementation unconditionally iterates all num_batches bloom filters without checking bloom_filter_is_zeroed(). This is functionally correct — a zeroed bloom filter has no bits set, so check_non_inclusion_ref will always pass on it. But the comment creates a false expectation that there's a conditional skip happening.
📝 Suggested doc fix

```diff
-    /// Check non-inclusion in all bloom filters which are not zeroed.
+    /// Check non-inclusion across all bloom filters.
+    /// Zeroed bloom filters pass trivially (no bits set).
```
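The "zeroed filters trivially pass" claim is easy to see in a toy model: with no bits set, the very first probe finds an unset bit and `contains` returns false. A minimal sketch — the probe function below is a hypothetical stand-in, not the crate's keccak-based `probe_index_keccak`:

```rust
// Toy membership probe over a raw bit store (Msb0 bit order, like the
// real BloomFilterRef). The probe index derivation is illustrative only.
fn contains(store: &[u8], num_iters: usize, value: &[u8; 32]) -> bool {
    let capacity_bits = store.len() * 8;
    for i in 0..num_iters {
        // Toy probe index: mix the iteration counter into the first byte.
        let probe = (value[0] as usize ^ i).wrapping_mul(31) % capacity_bits;
        let (byte, bit) = (probe / 8, 7 - probe % 8);
        if store[byte] & (1u8 << bit) == 0 {
            return false; // one unset bit proves non-membership
        }
    }
    true
}

fn main() {
    let zeroed = vec![0u8; 128]; // zeroed store: no bits set anywhere
    // A zeroed filter can never report membership, so a non-inclusion
    // check over it always succeeds.
    assert!(!contains(&zeroed, 3, &[42u8; 32]));
    println!("zeroed filter always passes non-inclusion");
}
```

This is why skipping zeroed filters is a potential optimization, not a correctness requirement.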
```rust
impl<'a> BatchedQueueRef<'a> {
    /// Deserialize an output queue (immutable) from account info.
    pub fn output_from_account_info<A: AccountInfoTrait>(
        account_info: &A,
    ) -> Result<BatchedQueueRef<'a>, BatchedMerkleTreeError> {
        Self::from_account_info::<OUTPUT_STATE_QUEUE_TYPE_V2, A>(
            &Pubkey::new_from_array(ACCOUNT_COMPRESSION_PROGRAM_ID),
            account_info,
        )
    }

    pub(crate) fn from_account_info<const QUEUE_TYPE: u64, A: AccountInfoTrait>(
        program_id: &Pubkey,
        account_info: &A,
    ) -> Result<BatchedQueueRef<'a>, BatchedMerkleTreeError> {
        check_account_info::<BatchedQueueAccount, A>(&program_id.to_bytes(), account_info)?;
        let data = account_info.try_borrow_data()?;
        // SAFETY: We extend the lifetime of the borrowed data to 'a.
        // The borrow is shared (immutable), so dropping the Ref guard
        // restores pinocchio's borrow state correctly for shared borrows.
        let data_slice: &'a [u8] = unsafe { std::slice::from_raw_parts(data.as_ptr(), data.len()) };
        Self::from_bytes::<QUEUE_TYPE>(data_slice, account_info.key().into())
    }
```
🧹 Nitpick | 🔵 Trivial
Minor API inconsistency: program_id parameter type differs from merkle_tree_ref.rs.
In merkle_tree_ref.rs, from_account_info takes program_id: &[u8; 32], while here it takes program_id: &Pubkey. Both ultimately pass &[u8; 32] to check_account_info. This works, but the divergent signatures between sibling types can be surprising for consumers.
Consider aligning the parameter types across BatchedMerkleTreeRef::from_account_info and BatchedQueueRef::from_account_info — both are pub(crate), so the blast radius is small.
```rust
    #[test]
    fn test_queue_ref_pubkey() {
        let (data, _expected_pubkey) = QueueAccountBuilder::output_queue().build();
        let queue_ref = BatchedQueueRef::output_from_bytes(&data).unwrap();

        // pubkey() should return the default pubkey (since output_from_bytes uses Pubkey::default())
        // Note: The builder creates the account with a unique pubkey but output_from_bytes
        // passes Pubkey::default() in the from_bytes call
        assert_eq!(
            *queue_ref.pubkey(),
            Pubkey::default(),
            "pubkey() should return default pubkey"
        );
    }
```
🧹 Nitpick | 🔵 Trivial
The pubkey() returning Pubkey::default() is worth a closer look.
The comment on Lines 289-291 explains that output_from_bytes passes Pubkey::default() internally, which is why the builder's unique pubkey is ignored. This is the expected behavior for the from_bytes path (as opposed to from_account_info which extracts the pubkey from the account info). The test documents this correctly, but consider whether a complementary test exercising the from_account_info path (where pubkey is extracted from the actual account) would be valuable for completeness.
```rust
    pub fn build_with_wrong_queue_type(self, wrong_type: u64) -> (Vec<u8>, Pubkey) {
        let (mut data, pubkey) = self.build();
        // In BatchedQueueMetadata, metadata is QueueMetadata which has:
        // AccessMetadata (3 pubkeys = 96 bytes) + RolloverMetadata (7*u64 = 56 bytes) +
        // associated_merkle_tree (32 bytes) + next_queue (32 bytes) + queue_type (8 bytes)
        // Total offset from start of metadata = 96 + 56 + 32 + 32 = 216
        // Plus 8 for discriminator = 224
        let queue_type_offset = 8 + 96 + 56 + 32 + 32;
        data[queue_type_offset..queue_type_offset + 8].copy_from_slice(&wrong_type.to_le_bytes());
        (data, pubkey)
    }
```
🧹 Nitpick | 🔵 Trivial
Hardcoded offset for queue_type is fragile.
The offset calculation 8 + 96 + 56 + 32 + 32 = 224 depends on the exact memory layout of QueueMetadata. If any field is added, reordered, or padded differently, this will silently corrupt the wrong bytes. Consider computing the offset programmatically, e.g., by finding the byte position of queue_type via memoffset::offset_of! or by serializing, modifying, and re-serializing.
The same concern applies to build_with_wrong_tree_type (line 116) which hardcodes tree_type_offset = 8.
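One way to make the offset robust, sketched with hypothetical `repr(C)` stand-ins for the metadata structs (field names mirror the review's description; sizes are illustrative, and since Rust 1.77 `std::mem::offset_of!` works without the extra memoffset dependency):

```rust
use std::mem::offset_of;

// Hypothetical repr(C) stand-ins mirroring the layout described in the
// review comment — not the crate's real structs.
#[repr(C)]
struct AccessMetadata {
    owner: [u8; 32],
    program_owner: [u8; 32],
    forester: [u8; 32],
}

#[repr(C)]
struct RolloverMetadata {
    fields: [u64; 7],
}

#[repr(C)]
struct QueueMetadata {
    access: AccessMetadata,
    rollover: RolloverMetadata,
    associated_merkle_tree: [u8; 32],
    next_queue: [u8; 32],
    queue_type: u64,
}

fn main() {
    const DISCRIMINATOR_LEN: usize = 8;
    // Computed offset tracks layout changes automatically, unlike the
    // hardcoded 8 + 96 + 56 + 32 + 32.
    let queue_type_offset = DISCRIMINATOR_LEN + offset_of!(QueueMetadata, queue_type);
    assert_eq!(queue_type_offset, 8 + 96 + 56 + 32 + 32);
    println!("queue_type offset = {queue_type_offset}");
}
```

If a field is inserted or reordered, the computed offset follows the struct while the magic number silently corrupts the wrong bytes.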
```rust
/// Immutable bloom filter reference for read-only access.
///
/// Uses `&'a [u8]` instead of `&'a mut [u8]` for the store,
/// enabling shared borrows of account data.
pub struct BloomFilterRef<'a> {
    pub num_iters: usize,
    pub capacity: u64,
    pub store: &'a [u8],
}

impl<'a> BloomFilterRef<'a> {
    pub fn new(
        num_iters: usize,
        capacity: u64,
        store: &'a [u8],
    ) -> Result<Self, BloomFilterError> {
        if store.len() * 8 != capacity as usize {
            return Err(BloomFilterError::InvalidStoreCapacity);
        }
        Ok(Self {
            num_iters,
            capacity,
            store,
        })
    }

    pub fn contains(&self, value: &[u8; 32]) -> bool {
        use bitvec::prelude::*;
        let bits = BitSlice::<u8, Msb0>::from_slice(self.store);
        for i in 0..self.num_iters {
            let probe_index = BloomFilter::probe_index_keccak(value, i, &self.capacity);
            if !bits[probe_index] {
                return false;
            }
        }
        true
    }
}
```
🧹 Nitpick | 🔵 Trivial
Well-designed immutable view — clean separation of concerns.
The BloomFilterRef struct and its contains implementation correctly mirror the read-only semantics of BloomFilter::contains while avoiding the unnecessary &mut self requirement. The membership check logic is consistent with the mutable counterpart, and the from_slice usage (vs. from_slice_mut) properly reflects the immutable borrow.
One small gap against the coding guidelines: the public methods new and contains on BloomFilterRef lack rustdoc comments. As per coding guidelines, program-libs/bloom-filter/**/*.rs should "Document public API types and methods using rustdoc comments, including… BloomFilter::calculate_bloom_filter_size (optimal bit count calculation)…" — the same standard applies to the new BloomFilterRef methods.
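A possible shape for the requested rustdoc, as a self-contained sketch — the struct below and its `String` error type are stand-ins for the crate's `BloomFilterRef` and `BloomFilterError`, so the example compiles on its own:

```rust
/// Read-only view over a bloom filter's backing bit store.
///
/// Invariant: `store.len() * 8 == capacity`, enforced by
/// [`BloomFilterRef::new`]. Never mutates the underlying data.
struct BloomFilterRef<'a> {
    num_iters: usize,
    capacity: u64,
    store: &'a [u8],
}

impl<'a> BloomFilterRef<'a> {
    /// Creates a read-only view over `store`.
    ///
    /// Returns an error when the store length in bits does not match
    /// `capacity` (i.e. `store.len() * 8 != capacity`).
    fn new(num_iters: usize, capacity: u64, store: &'a [u8]) -> Result<Self, String> {
        if store.len() * 8 != capacity as usize {
            return Err("invalid store capacity".to_string());
        }
        Ok(Self { num_iters, capacity, store })
    }
}

fn main() {
    let store = [0u8; 16]; // 128 bits of backing storage
    assert!(BloomFilterRef::new(3, 128, &store).is_ok());
    assert!(BloomFilterRef::new(3, 64, &store).is_err());
    println!("capacity invariant enforced");
}
```

`contains` would get the same treatment: document the expected 32-byte input and the read-only semantics, matching the style of `BloomFilter::calculate_bloom_filter_size`.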
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@program-libs/batched-merkle-tree/src/merkle_tree_ref.rs`:
- Around line 119-127: The bloom filter parsing currently calls
account_data.split_at(bloom_filter_size) without validating buffer length, which
can panic on truncated input; update the code around
ZeroCopyCyclicVecRefU64::from_bytes_at and the subsequent bf_store_0/bf_store_1
parsing to first check that account_data.len() >=
bloom_filter_size.checked_mul(2).ok_or(ZeroCopyError::ShortBuffer)? (or
equivalent) and if not return Err(ZeroCopyError::ShortBuffer), then safely
slice/split the two bloom filter stores (bf_store_0, bf_store_1); reference the
ZeroCopyCyclicVecRefU64::from_bytes_at call and the bf_store_0/bf_store_1
variables to locate the fix.
In `@program-libs/batched-merkle-tree/tests/queue_ref.rs`:
- Around line 10-23: Replace the hard‑coded account buffer length (vec![0u8;
1000]) with a size derived from the QueueAccountBuilder (or an explicit helper)
so the test uses the same serialized layout as production; call the builder’s
size/serialized_len helper (or build a default QueueAccount via
QueueAccountBuilder and use its serialized size) and allocate account_data using
that derived length before initializing the queue structures like QueueMetadata
and queue init.
```rust
        // 3. Parse root history (cyclic vec).
        let (root_history, account_data) =
            ZeroCopyCyclicVecRefU64::<[u8; 32]>::from_bytes_at(account_data)?;

        // 4. Parse bloom filter stores (immutable).
        let bloom_filter_size = metadata.queue_batches.get_bloomfilter_size_bytes();
        let (bf_store_0, account_data) = account_data.split_at(bloom_filter_size);
        let (bf_store_1, _account_data) = account_data.split_at(bloom_filter_size);
```
Guard bloom filter parsing against short buffers.
Line 125 uses split_at without a length check; malformed or truncated input can panic instead of returning a structured error. Prefer an explicit size check and return a ZeroCopyError.
🛠️ Suggested fix

```diff
-        let bloom_filter_size = metadata.queue_batches.get_bloomfilter_size_bytes();
-        let (bf_store_0, account_data) = account_data.split_at(bloom_filter_size);
-        let (bf_store_1, _account_data) = account_data.split_at(bloom_filter_size);
+        let bloom_filter_size = metadata.queue_batches.get_bloomfilter_size_bytes();
+        let total_bf_bytes = bloom_filter_size
+            .checked_mul(2)
+            .ok_or(ZeroCopyError::InvalidConversion)?;
+        if account_data.len() < total_bf_bytes {
+            return Err(
+                ZeroCopyError::InsufficientMemoryAllocated(account_data.len(), total_bf_bytes)
+                    .into(),
+            );
+        }
+        let (bf_store_0, account_data) = account_data.split_at(bloom_filter_size);
+        let (bf_store_1, _account_data) = account_data.split_at(bloom_filter_size);
```
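The guard pattern in isolation, as a minimal sketch (a plain `String` error stands in for the crate's `ZeroCopyError` variants):

```rust
// Panic-free double split: validate the total length up front, then
// split. `split_at` panics when the index exceeds the slice length,
// so the check must come first.
fn split_two(data: &[u8], size: usize) -> Result<(&[u8], &[u8]), String> {
    let total = size.checked_mul(2).ok_or("size overflow")?;
    if data.len() < total {
        return Err(format!("short buffer: have {}, need {}", data.len(), total));
    }
    let (a, rest) = data.split_at(size);
    let (b, _rest) = rest.split_at(size);
    Ok((a, b))
}

fn main() {
    let buf = vec![1u8; 10];
    assert!(split_two(&buf, 5).is_ok());
    assert!(split_two(&buf, 6).is_err()); // truncated input: Err, not panic
    println!("short buffers rejected without panicking");
}
```

The same shape applies to any deserializer fed attacker-controlled account data: every `split_at` should be preceded by an explicit length check that maps to a structured error.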
In `@program-libs/batched-merkle-tree/src/merkle_tree_ref.rs` around lines 119 -
127, The bloom filter parsing currently calls
account_data.split_at(bloom_filter_size) without validating buffer length, which
can panic on truncated input; update the code around
ZeroCopyCyclicVecRefU64::from_bytes_at and the subsequent bf_store_0/bf_store_1
parsing to first check that account_data.len() >=
bloom_filter_size.checked_mul(2).ok_or(ZeroCopyError::ShortBuffer)? (or
equivalent) and if not return Err(ZeroCopyError::ShortBuffer), then safely
slice/split the two bloom filter stores (bf_store_0, bf_store_1); reference the
ZeroCopyCyclicVecRefU64::from_bytes_at call and the bf_store_0/bf_store_1
variables to locate the fix.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@program-libs/batched-merkle-tree/tests/merkle_tree_ref.rs`:
- Line 6: The import statement brings in pubkey::Pubkey but Pubkey is unused and
causes a build error; update the import to remove the unused symbol (keep
TreeType) so the file imports only what it uses (i.e., remove Pubkey from the
use light_compressed_account::{...} declaration).
- Around line 185-221: Run rustfmt (cargo fmt) on the test file to fix
formatting issues (trailing whitespace/indentation) reported around the match
arms manipulating BatchedMerkleTreeAccount and BloomFilter; specifically
reformat the match block that contains the
BatchedMerkleTreeAccount::state_from_bytes calls, the BloomFilter::new/insert
sequence, and the sequence_number increment so rustfmt normalizes whitespace and
indentation and resolves the lint warnings.
In `@program-libs/batched-merkle-tree/tests/queue_ref.rs`:
- Around line 74-78: Replace the boolean equality assertions with idiomatic
boolean assertions: change uses of
assert_eq!(queue_ref.prove_inclusion_by_index(0, &test_hash_1).unwrap(), true,
"...") and the similar assertions on lines 79-83 to
assert!(queue_ref.prove_inclusion_by_index(0, &test_hash_1).unwrap(), "...")
(and corresponding calls for other test hashes), keeping the same message but
using assert! for conciseness; locate the calls to
queue_ref.prove_inclusion_by_index and update them accordingly.
```rust
use light_batched_merkle_tree::{
    merkle_tree::BatchedMerkleTreeAccount, merkle_tree_ref::BatchedMerkleTreeRef,
};
use light_compressed_account::{pubkey::Pubkey, TreeType};
```
Unused import pubkey::Pubkey breaks the build.
The CI fails with unused import: pubkey::Pubkey. This import isn't used anywhere in the file — all Pubkey values come from builder return types. Since Rust treats unused imports as errors in CI (likely #[deny(unused_imports)] or -D warnings), this blocks the pipeline.
🔧 Fix: remove the unused import

```diff
-use light_compressed_account::{pubkey::Pubkey, TreeType};
+use light_compressed_account::TreeType;
```
🧰 Tools
🪛 GitHub Actions: rust
[error] 6-6: unused import: pubkey::Pubkey
🪛 GitHub Check: Test batched-merkle-tree-simulate
[failure] 6-6:
unused import: pubkey::Pubkey
```rust
                0 => {
                    // Push random root.
                    let mut tree_mut =
                        BatchedMerkleTreeAccount::state_from_bytes(&mut account_data, &pubkey)
                            .unwrap();
                    tree_mut.root_history.push(rng.gen());
                }
                1 => {
                    // Insert into bloom filter of a random batch.
                    let mut tree_mut =
                        BatchedMerkleTreeAccount::state_from_bytes(&mut account_data, &pubkey)
                            .unwrap();
                    let batch_idx = rng.gen_range(0..2usize);
                    let num_iters =
                        tree_mut.queue_batches.batches[batch_idx].num_iters as usize;
                    let capacity =
                        tree_mut.queue_batches.batches[batch_idx].bloom_filter_capacity;
                    let value: [u8; 32] = rng.gen();
                    let mut bf = BloomFilter::new(
                        num_iters,
                        capacity,
                        &mut tree_mut.bloom_filter_stores[batch_idx],
                    )
                    .unwrap();
                    bf.insert(&value).unwrap();
                }
                2 => {
                    // Increment sequence number.
                    let mut tree_mut =
                        BatchedMerkleTreeAccount::state_from_bytes(&mut account_data, &pubkey)
                            .unwrap();
                    tree_mut.sequence_number += 1;
                }
                _ => unreachable!(),
            }

            // Clone data so we can deserialize both paths independently.
```
Formatting lint warnings on these lines.
The static analysis (lint check) reports formatting diffs on lines 185, 193, 211, and 221. These are likely trailing whitespace or indentation mismatches that rustfmt would fix. Run cargo fmt on this file to resolve.
🧰 Tools
🪛 GitHub Check: lint
[warning] 221-221:
Diff in /home/runner/work/light-protocol/light-protocol/program-libs/batched-merkle-tree/tests/merkle_tree_ref.rs
[warning] 211-211:
Diff in /home/runner/work/light-protocol/light-protocol/program-libs/batched-merkle-tree/tests/merkle_tree_ref.rs
[warning] 193-193:
Diff in /home/runner/work/light-protocol/light-protocol/program-libs/batched-merkle-tree/tests/merkle_tree_ref.rs
[warning] 185-185:
Diff in /home/runner/work/light-protocol/light-protocol/program-libs/batched-merkle-tree/tests/merkle_tree_ref.rs
```rust
        assert_eq!(
            queue_ref.prove_inclusion_by_index(0, &test_hash_1).unwrap(),
            true,
            "Valid index with matching hash should return true"
        );
```
🧹 Nitpick | 🔵 Trivial
Prefer assert! over assert_eq!(…, true, …) for boolean results.
Minor style point — assert_eq!(expr, true, msg) is functionally correct but slightly less idiomatic Rust. assert!(expr, msg) is more concise and produces the same panic on failure.
Same applies to lines 79-83.
✏️ Optional cleanup

```diff
-    assert_eq!(
-        queue_ref.prove_inclusion_by_index(0, &test_hash_1).unwrap(),
-        true,
-        "Valid index with matching hash should return true"
-    );
-    assert_eq!(
-        queue_ref.prove_inclusion_by_index(1, &test_hash_2).unwrap(),
-        true,
-        "Second element with matching hash should return true"
-    );
+    assert!(
+        queue_ref.prove_inclusion_by_index(0, &test_hash_1).unwrap(),
+        "Valid index with matching hash should return true"
+    );
+    assert!(
+        queue_ref.prove_inclusion_by_index(1, &test_hash_2).unwrap(),
+        "Second element with matching hash should return true"
+    );
```
```rust
        pubkey: &Pubkey,
    ) -> Result<BatchedMerkleTreeRef<'a>, BatchedMerkleTreeError> {
        // 1. Skip discriminator.
        let (_discriminator, account_data) = account_data.split_at(DISCRIMINATOR_LEN);
```
check discriminator here
…ueRef Add test helpers (builders and assertion utilities) and 29 new tests covering deserialization, error handling, root history, bloom filters, queue inclusion proofs, and configuration variants. Migrate manual account construction to use builders where possible and remove dead code.
Force-pushed: 0b0edc0 to 92483e6
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
CLAUDE.md (2)
Lines 5-79: ⚠️ Potential issue | 🟡 Minor — Required CLAUDE.md sections are still missing.

This file doesn't include the mandated Summary/Used in/Source Code Structure sections, so CLAUDE.md compliance is incomplete. As per coding guidelines: every crate must have a CLAUDE.md file at the root containing: Summary (2-5 bullet points), Used in section, Navigation instructions, and High-level sections (Accounts/Instructions/Source Code Structure based on crate type), with optional Config Requirements and Security Considerations sections.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@CLAUDE.md` around lines 5 - 79, Add the missing mandated sections to CLAUDE.md: include a "Summary" with 2–5 bullet points, a "Used in" section listing the crates/projects that import this crate, clear "Navigation instructions" describing where to find docs and examples, and the required high-level sections tailored to crate type ("Accounts", "Instructions", and "Source Code Structure"). Also optionally add "Config Requirements" and "Security Considerations". Update the top-level headings in the existing CLAUDE.md (the file you modified) to include these sections so the file matches the repository guideline for every crate.
Lines 17-61: ⚠️ Potential issue | 🟡 Minor — Add a language tag to the directory tree code fence.

Markdownlint flags fenced blocks without a language; use `text` for the tree block.

Suggested fix:

````diff
-```
+```text
````

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@CLAUDE.md` around lines 17 - 61, The fenced directory-tree block in CLAUDE.md is missing a language tag which triggers markdownlint; update the triple-backtick fence enclosing the tree (the block starting with "light-protocol/") to include the language tag "text" by changing ``` to ```text so the tree is treated as plain text.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@program-libs/batched-merkle-tree/src/queue_ref.rs`:
- Around line 91-106: The two uses of account_data.split_at(metadata_size) can
panic on truncated buffers; before each split check account_data.len() >=
metadata_size and if not return a ZeroCopyError (wrap as
ZeroCopyError::from(...)) instead of splitting, then proceed to call
Ref::from_prefix/meta parsing; do the same before from_prefix_with_elems by
ensuring account_data.len() has at least capacity * elem_size available (use
capacity0/capacity1 and element size for [[u8; 32]]) and return a ZeroCopyError
on short data so metadata_size, account_data.split_at(metadata_size),
value_vec_meta0/value_vec_meta1, and Ref::from_prefix_with_elems are guarded
against panics.
In `@program-libs/batched-merkle-tree/tests/test_helpers/account_builders.rs`:
- Around line 8-119: The MerkleTreeAccountBuilder and its methods
(MerkleTreeAccountBuilder, build, build_with_bad_discriminator,
build_with_wrong_tree_type, etc.) are triggering dead-code warnings; silence
them by adding #[allow(dead_code)] to the builder (place the attribute directly
above the MerkleTreeAccountBuilder struct) so the linter stops flagging unused
items, or alternatively gate the whole builder with #[cfg(test)] if it should
only exist for tests; update the attribute on the same declarations that define
the builder and its impl so the warning is suppressed.
- Around line 68-81: Clippy flags field_reassign_with_default in calculate_size:
instead of creating temp_metadata with BatchedMerkleTreeMetadata::default() and
then reassigning fields, construct temp_metadata as a single literal with the
required fields set (root_history_capacity, height, tree_type, capacity,
queue_batches with bloom_filter_capacity and initialized via queue_batches.init
or by building the QueueBatches value before composing the metadata) so you no
longer mutate default fields; keep the call to queue_batches.init (or build the
initialized QueueBatches instance) and then call get_account_size() on the
assembled BatchedMerkleTreeMetadata.
In `@programs/system/src/utils.rs`:
- Around line 28-33: The function get_queue_and_tree_accounts currently uses
direct indexing (&accounts[queue_index], &accounts[tree_index]) which can panic
on out-of-bounds input; update it to use accounts.get(queue_index) and
accounts.get(tree_index), validate both Option results, and return an
appropriate SystemProgramError (e.g., an OutOfBounds or InvalidAccountIndex
variant) wrapped in Err when either index is missing so the function never
aborts on malformed input.
---
Outside diff comments:
In `@CLAUDE.md`:
- Around line 5-79: Add the missing mandated sections to CLAUDE.md: include a
"Summary" with 2–5 bullet points, a "Used in" section listing the
crates/projects that import this crate, clear "Navigation instructions"
describing where to find docs and examples, and the required high-level sections
tailored to crate type ("Accounts", "Instructions", and "Source Code
Structure"). Also optionally add "Config Requirements" and "Security
Considerations". Update the top-level headings in the existing CLAUDE.md (the
file you modified) to include these sections so the file matches the repository
guideline for every crate.
- Around line 17-61: The fenced directory-tree block in CLAUDE.md is missing a
language tag which triggers markdownlint; update the triple-backtick fence
enclosing the tree (the block starting with "light-protocol/") to include the
language tag "text" by changing ``` to ```text so the tree is treated as plain
text.
---
Duplicate comments:
In `@program-libs/batched-merkle-tree/src/merkle_tree_ref.rs`:
- Around line 136-147: Update the doc comment for
check_input_queue_non_inclusion to accurately describe its behavior: it iterates
through all batches and calls Batch::check_non_inclusion_ref for each bloom
filter (including zeroed ones) rather than skipping zeroed filters; modify the
comment text to state that all bloom filters are checked (or remove the clause
about skipping zeroed filters) so the documentation matches the implementation.
- Around line 121-125: The code calls account_data.split_at(...) twice to
extract bf_store_0 and bf_store_1 but will panic on short buffers; before each
split, check account_data.len() >= bloom_filter_size (use the bloom_filter_size
from metadata.queue_batches.get_bloomfilter_size_bytes()) and return a
ZeroCopyError (or convert to the existing error type) if too small, then perform
the splits to produce bf_store_0 and bf_store_1; update any function
signature/return path that reads these values to propagate the ZeroCopyError.
In `@program-libs/batched-merkle-tree/src/queue_ref.rs`:
- Around line 41-56: The from_account_info function on BatchedQueueRef currently
accepts program_id: &Pubkey which is inconsistent with the sibling
BatchedMerkleTreeRef that uses a &[u8; 32]; change the signature of
BatchedQueueRef::from_account_info<const QUEUE_TYPE: u64, A:
AccountInfoTrait>(program_id: &[u8; 32], account_info: &A) and update callers
such as output_from_account_info to pass program_id.as_ref()/Pubkey::to_bytes()
(or a reference to the 32-byte array) instead of &Pubkey, and adjust the
internal use (remove .to_bytes() call on program_id) to operate on the provided
&[u8;32]; ensure check_account_info is called with the program_id byte slice.
In `@program-libs/batched-merkle-tree/tests/merkle_tree_ref.rs`:
- Around line 3-6: Remove the unused Pubkey import from the use statement in
merkle_tree_ref.rs: edit the import line that currently reads "use
light_compressed_account::{pubkey::Pubkey, TreeType};" and drop "pubkey::Pubkey"
so only the used symbol(s) like TreeType remain; this will clear the
unused-import lint and fix the -D warnings build failure.
In `@program-libs/batched-merkle-tree/tests/queue_ref.rs`:
- Around line 75-85: Replace non-idiomatic boolean equality assertions with
direct boolean assertions: change uses of assert_eq!(..., true) when asserting
the result of queue_ref.prove_inclusion_by_index(…) to
assert!(queue_ref.prove_inclusion_by_index(…).unwrap()) (and similarly for index
1 / test_hash_2 and the other occurrences referenced around lines 202-205) so
the tests use assert! for boolean checks and satisfy the lint.
In `@program-libs/batched-merkle-tree/tests/test_helpers/account_builders.rs`:
- Around line 111-116: The test currently writes wrong_type at a hardcoded byte
offset (tree_type_offset = 8) in build_with_wrong_tree_type, which is brittle;
instead compute the byte offset programmatically by deriving the layout from the
real metadata type (e.g., use the BatchedMerkleTreeMetadata struct's
serialization/layout helpers or size_of of preceding fields, or serialize a
default metadata and locate the tree_type field), then write wrong_type at that
computed offset; apply the same change to the other occurrences noted around
lines 206-215 (the queue_type/other field writes) so tests remain correct when
the struct layout changes.
In `@program-libs/bloom-filter/src/lib.rs`:
- Around line 141-154: Add rustdoc comments for the public API: document
BloomFilterRef::new and BloomFilterRef::contains (and likewise ensure
BloomFilter, BloomFilterError, BloomFilter::calculate_bloom_filter_size,
BloomFilter::calculate_optimal_hash_functions, and
BloomFilter::probe_index_keccak are documented) by adding /// comments above
each item describing its purpose, parameters, return value, and error
conditions; for example, for BloomFilterRef::new describe the meaning of
num_iters, capacity, and store and that it returns
Err(BloomFilterError::InvalidStoreCapacity) when store length mismatches, and
for contains describe the input slice ([u8; 32]) and what true/false indicates.
Ensure rustdoc uses conventional sections (Examples/Errors/Returns) and
documents public types and variants of BloomFilterError.
```rust
let metadata_size = ZeroCopyVecU64::<[u8; 32]>::metadata_size();

let (meta0_bytes, account_data) = account_data.split_at(metadata_size);
let (value_vec_meta0, _padding) =
    Ref::<&'a [u8], [u64; 2]>::from_prefix(meta0_bytes).map_err(ZeroCopyError::from)?;
let capacity0 = value_vec_meta0[1] as usize; // CAPACITY_INDEX = 1
let (value_vec_data0, account_data) =
    Ref::<&'a [u8], [[u8; 32]]>::from_prefix_with_elems(account_data, capacity0)
        .map_err(ZeroCopyError::from)?;

let (meta1_bytes, account_data) = account_data.split_at(metadata_size);
let (value_vec_meta1, _padding) =
    Ref::<&'a [u8], [u64; 2]>::from_prefix(meta1_bytes).map_err(ZeroCopyError::from)?;
let capacity1 = value_vec_meta1[1] as usize;
let (value_vec_data1, _account_data) =
    Ref::<&'a [u8], [[u8; 32]]>::from_prefix_with_elems(account_data, capacity1)
```
Avoid split_at panics on short buffers
Both split_at(metadata_size) calls can panic if the account data is truncated. Add explicit length checks and return a ZeroCopyError instead.

🛠️ Suggested fix

```diff
     let metadata_size = ZeroCopyVecU64::<[u8; 32]>::metadata_size();
-    let (meta0_bytes, account_data) = account_data.split_at(metadata_size);
+    if account_data.len() < metadata_size {
+        return Err(
+            ZeroCopyError::InsufficientMemoryAllocated(account_data.len(), metadata_size).into(),
+        );
+    }
+    let (meta0_bytes, account_data) = account_data.split_at(metadata_size);
     let (value_vec_meta0, _padding) =
         Ref::<&'a [u8], [u64; 2]>::from_prefix(meta0_bytes).map_err(ZeroCopyError::from)?;
@@
-    let (meta1_bytes, account_data) = account_data.split_at(metadata_size);
+    if account_data.len() < metadata_size {
+        return Err(
+            ZeroCopyError::InsufficientMemoryAllocated(account_data.len(), metadata_size).into(),
+        );
+    }
+    let (meta1_bytes, account_data) = account_data.split_at(metadata_size);
     let (value_vec_meta1, _padding) =
         Ref::<&'a [u8], [u64; 2]>::from_prefix(meta1_bytes).map_err(ZeroCopyError::from)?;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@program-libs/batched-merkle-tree/src/queue_ref.rs` around lines 91 - 106, The
two uses of account_data.split_at(metadata_size) can panic on truncated buffers;
before each split check account_data.len() >= metadata_size and if not return a
ZeroCopyError (wrap as ZeroCopyError::from(...)) instead of splitting, then
proceed to call Ref::from_prefix/meta parsing; do the same before
from_prefix_with_elems by ensuring account_data.len() has at least capacity *
elem_size available (use capacity0/capacity1 and element size for [[u8; 32]])
and return a ZeroCopyError on short data so metadata_size,
account_data.split_at(metadata_size), value_vec_meta0/value_vec_meta1, and
Ref::from_prefix_with_elems are guarded against panics.
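The guarded-split pattern recommended above can be sketched in isolation. This is a minimal illustration, not the crate's code: `ParseError` is a placeholder standing in for `ZeroCopyError`.

```rust
// Placeholder error type; the real code would use the crate's ZeroCopyError.
#[derive(Debug, PartialEq)]
enum ParseError {
    InsufficientMemory { have: usize, need: usize },
}

fn checked_split_at(data: &[u8], mid: usize) -> Result<(&[u8], &[u8]), ParseError> {
    // slice::split_at panics when mid > len, so check the length first
    // and surface a recoverable error instead.
    if data.len() < mid {
        return Err(ParseError::InsufficientMemory {
            have: data.len(),
            need: mid,
        });
    }
    Ok(data.split_at(mid))
}

fn main() {
    let buf = [0u8; 16];
    assert!(checked_split_at(&buf, 8).is_ok());
    // A truncated buffer now yields an error instead of a panic.
    assert!(checked_split_at(&buf[..4], 8).is_err());
    println!("ok");
}
```

The same length check applies before `from_prefix_with_elems`: compute the required byte count from the capacity and element size, and bail out early when the buffer is short.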
```rust
/// Builder for creating valid and invalid BatchedMerkleTreeAccount test data.
pub struct MerkleTreeAccountBuilder {
    tree_type: TreeType,
    batch_size: u64,
    zkp_batch_size: u64,
    root_history_capacity: u32,
    height: u32,
    num_iters: u64,
    bloom_filter_capacity: u64,
}

impl MerkleTreeAccountBuilder {
    /// Create a state tree builder with default test parameters.
    pub fn state_tree() -> Self {
        Self {
            tree_type: TreeType::StateV2,
            batch_size: 5,
            zkp_batch_size: 1,
            root_history_capacity: 10,
            height: 40,
            num_iters: 1,
            bloom_filter_capacity: 8000,
        }
    }

    pub fn with_tree_type(mut self, tree_type: TreeType) -> Self {
        self.tree_type = tree_type;
        self
    }

    pub fn with_batch_size(mut self, batch_size: u64) -> Self {
        self.batch_size = batch_size;
        self
    }

    pub fn with_zkp_batch_size(mut self, zkp_batch_size: u64) -> Self {
        self.zkp_batch_size = zkp_batch_size;
        self
    }

    pub fn with_root_history_capacity(mut self, capacity: u32) -> Self {
        self.root_history_capacity = capacity;
        self
    }

    pub fn with_height(mut self, height: u32) -> Self {
        self.height = height;
        self
    }

    pub fn with_bloom_filter_capacity(mut self, capacity: u64) -> Self {
        self.bloom_filter_capacity = capacity;
        self
    }

    pub fn with_num_iters(mut self, num_iters: u64) -> Self {
        self.num_iters = num_iters;
        self
    }

    /// Pre-calculate the exact account size needed for these parameters.
    fn calculate_size(&self) -> usize {
        let mut temp_metadata = BatchedMerkleTreeMetadata::default();
        temp_metadata.root_history_capacity = self.root_history_capacity;
        temp_metadata.height = self.height;
        temp_metadata.tree_type = self.tree_type as u64;
        temp_metadata.capacity = 2u64.pow(self.height);
        temp_metadata
            .queue_batches
            .init(self.batch_size, self.zkp_batch_size)
            .unwrap();
        temp_metadata.queue_batches.bloom_filter_capacity = self.bloom_filter_capacity;
        temp_metadata.get_account_size().unwrap()
    }

    /// Build a valid account with correctly initialized data.
    pub fn build(self) -> (Vec<u8>, Pubkey) {
        let pubkey = Pubkey::new_unique();
        let size = self.calculate_size();
        let mut data = vec![0u8; size];
        BatchedMerkleTreeAccount::init(
            &mut data,
            &pubkey,
            MerkleTreeMetadata::default(),
            self.root_history_capacity,
            self.batch_size,
            self.zkp_batch_size,
            self.height,
            self.num_iters,
            self.bloom_filter_capacity,
            self.tree_type,
        )
        .unwrap();
        (data, pubkey)
    }

    /// Build account with corrupted discriminator.
    pub fn build_with_bad_discriminator(self) -> (Vec<u8>, Pubkey) {
        let (mut data, pubkey) = self.build();
        data[0..8].copy_from_slice(b"BadDiscr");
        (data, pubkey)
    }

    /// Build account with wrong tree type field (but correct discriminator).
    pub fn build_with_wrong_tree_type(self, wrong_type: u64) -> (Vec<u8>, Pubkey) {
        let (mut data, pubkey) = self.build();
        // tree_type is the first field of BatchedMerkleTreeMetadata, right after discriminator
        let tree_type_offset = 8; // 8 bytes discriminator
        data[tree_type_offset..tree_type_offset + 8].copy_from_slice(&wrong_type.to_le_bytes());
        (data, pubkey)
    }
}
```
Lint is failing due to unused builder types.
CI reports both builders and their associated items as never constructed; with warnings-as-errors this blocks the pipeline. Either exercise them in tests or gate them to avoid dead-code lint.
Also applies to: 121-223
🧰 Tools
🪛 GitHub Actions: lint
[error] 9-9: struct MerkleTreeAccountBuilder is never constructed
🪛 GitHub Check: lint
[failure] 71-71:
field assignment outside of initializer for an instance created with Default::default()
[failure] 21-21:
multiple associated items are never used
[failure] 9-9:
struct MerkleTreeAccountBuilder is never constructed
🪛 GitHub Check: Test batched-merkle-tree-simulate
[failure] 21-21:
multiple associated items are never used
[failure] 9-9:
struct MerkleTreeAccountBuilder is never constructed
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@program-libs/batched-merkle-tree/tests/test_helpers/account_builders.rs`
around lines 8 - 119, The MerkleTreeAccountBuilder and its methods
(MerkleTreeAccountBuilder, build, build_with_bad_discriminator,
build_with_wrong_tree_type, etc.) are triggering dead-code warnings; silence
them by adding #[allow(dead_code)] to the builder (place the attribute directly
above the MerkleTreeAccountBuilder struct) so the linter stops flagging unused
items, or alternatively gate the whole builder with #[cfg(test)] if it should
only exist for tests; update the attribute on the same declarations that define
the builder and its impl so the warning is suppressed.
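Either resolution can be sketched stand-alone. The snippet below is illustrative only: `DemoBuilder` is a hypothetical stand-in for `MerkleTreeAccountBuilder`, not the real type.

```rust
// Hypothetical stand-in for the test builder. Option 1: allow the lint
// on the struct and its impl so unused helpers compile warning-free.
#[allow(dead_code)]
pub struct DemoBuilder {
    batch_size: u64,
}

#[allow(dead_code)]
impl DemoBuilder {
    pub fn new() -> Self {
        Self { batch_size: 5 }
    }
}

// Option 2: compile the helper only for tests.
// #[cfg(test)]
// pub struct TestOnlyBuilder { /* ... */ }

fn main() {
    // The allow attribute is harmless even when the item is exercised.
    let b = DemoBuilder::new();
    println!("{}", b.batch_size);
}
```

The `#[cfg(test)]` route is preferable when the builder should never ship in non-test builds; `#[allow(dead_code)]` is the lighter touch when some tests use it and others do not.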
```rust
/// Pre-calculate the exact account size needed for these parameters.
fn calculate_size(&self) -> usize {
    let mut temp_metadata = BatchedMerkleTreeMetadata::default();
    temp_metadata.root_history_capacity = self.root_history_capacity;
    temp_metadata.height = self.height;
    temp_metadata.tree_type = self.tree_type as u64;
    temp_metadata.capacity = 2u64.pow(self.height);
    temp_metadata
        .queue_batches
        .init(self.batch_size, self.zkp_batch_size)
        .unwrap();
    temp_metadata.queue_batches.bloom_filter_capacity = self.bloom_filter_capacity;
    temp_metadata.get_account_size().unwrap()
}
```
Clippy field_reassign_with_default is tripping in calculate_size.
Initialize the metadata in a single literal to satisfy the lint.

Suggested fix

```diff
-        let mut temp_metadata = BatchedMerkleTreeMetadata::default();
-        temp_metadata.root_history_capacity = self.root_history_capacity;
-        temp_metadata.height = self.height;
-        temp_metadata.tree_type = self.tree_type as u64;
-        temp_metadata.capacity = 2u64.pow(self.height);
+        let mut temp_metadata = BatchedMerkleTreeMetadata {
+            root_history_capacity: self.root_history_capacity,
+            height: self.height,
+            tree_type: self.tree_type as u64,
+            capacity: 2u64.pow(self.height),
+            ..Default::default()
+        };
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```rust
/// Pre-calculate the exact account size needed for these parameters.
fn calculate_size(&self) -> usize {
    let mut temp_metadata = BatchedMerkleTreeMetadata {
        root_history_capacity: self.root_history_capacity,
        height: self.height,
        tree_type: self.tree_type as u64,
        capacity: 2u64.pow(self.height),
        ..Default::default()
    };
    temp_metadata
        .queue_batches
        .init(self.batch_size, self.zkp_batch_size)
        .unwrap();
    temp_metadata.queue_batches.bloom_filter_capacity = self.bloom_filter_capacity;
    temp_metadata.get_account_size().unwrap()
}
```
🧰 Tools
🪛 GitHub Check: lint
[failure] 71-71:
field assignment outside of initializer for an instance created with Default::default()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@program-libs/batched-merkle-tree/tests/test_helpers/account_builders.rs`
around lines 68 - 81, Clippy flags field_reassign_with_default in
calculate_size: instead of creating temp_metadata with
BatchedMerkleTreeMetadata::default() and then reassigning fields, construct
temp_metadata as a single literal with the required fields set
(root_history_capacity, height, tree_type, capacity, queue_batches with
bloom_filter_capacity and initialized via queue_batches.init or by building the
QueueBatches value before composing the metadata) so you no longer mutate
default fields; keep the call to queue_batches.init (or build the initialized
QueueBatches instance) and then call get_account_size() on the assembled
BatchedMerkleTreeMetadata.
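The struct-update syntax the lint asks for can be shown in isolation. `Config` below is a hypothetical type chosen for illustration, not the real metadata struct.

```rust
// Illustrative type only; the point is functional-update syntax, which
// sets the fields of interest explicitly and fills the rest from Default.
#[derive(Default, Debug)]
struct Config {
    height: u32,
    capacity: u64,
    label: String,
}

fn main() {
    // Single literal: no field reassignment after Default::default(),
    // so clippy::field_reassign_with_default stays quiet.
    let cfg = Config {
        height: 40,
        capacity: 1u64 << 40,
        ..Default::default()
    };
    assert_eq!(cfg.label, String::new());
    println!("{}", cfg.height);
}
```

Fields that must be set through methods after construction (like `queue_batches.init` above) can still be mutated afterwards; the lint only objects to plain field writes immediately following `Default::default()`.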
```diff
 pub fn get_queue_and_tree_accounts<'b, 'info>(
-    accounts: &'b mut [AcpAccount<'info>],
+    accounts: &'b [AcpAccount<'info>],
     queue_index: usize,
     tree_index: usize,
-) -> std::result::Result<(&'b mut AcpAccount<'info>, &'b mut AcpAccount<'info>), SystemProgramError>
-{
-    let (smaller, bigger) = if queue_index < tree_index {
-        (queue_index, tree_index)
-    } else {
-        (tree_index, queue_index)
-    };
-    let (left, right) = accounts.split_at_mut(bigger);
-    let smaller_ref = &mut left[smaller];
-    let bigger_ref = &mut right[0];
-    Ok(if queue_index < tree_index {
-        (smaller_ref, bigger_ref)
-    } else {
-        (bigger_ref, smaller_ref)
-    })
+) -> std::result::Result<(&'b AcpAccount<'info>, &'b AcpAccount<'info>), SystemProgramError> {
+    Ok((&accounts[queue_index], &accounts[tree_index]))
```
Guard against out-of-bounds access when indexing accounts
Direct indexing will panic on malformed inputs even though the function returns Result. Prefer get() and return a SystemProgramError to avoid aborting the program.

🛠️ Suggested fix

```diff
 pub fn get_queue_and_tree_accounts<'b, 'info>(
     accounts: &'b [AcpAccount<'info>],
     queue_index: usize,
     tree_index: usize,
 ) -> std::result::Result<(&'b AcpAccount<'info>, &'b AcpAccount<'info>), SystemProgramError> {
-    Ok((&accounts[queue_index], &accounts[tree_index]))
+    let queue = accounts
+        .get(queue_index)
+        .ok_or(SystemProgramError::InvalidAccount)?;
+    let tree = accounts
+        .get(tree_index)
+        .ok_or(SystemProgramError::InvalidAccount)?;
+    Ok((queue, tree))
 }
```
}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@programs/system/src/utils.rs` around lines 28 - 33, The function
get_queue_and_tree_accounts currently uses direct indexing
(&accounts[queue_index], &accounts[tree_index]) which can panic on out-of-bounds
input; update it to use accounts.get(queue_index) and accounts.get(tree_index),
validate both Option results, and return an appropriate SystemProgramError
(e.g., an OutOfBounds or InvalidAccountIndex variant) wrapped in Err when either
index is missing so the function never aborts on malformed input.
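The panic-free lookup can be sketched with placeholder types; `Account` and `MyError` below are hypothetical stand-ins for `AcpAccount` and `SystemProgramError`, not the program's real types.

```rust
// Placeholder types standing in for AcpAccount and SystemProgramError.
#[derive(Debug)]
struct Account(u8);

#[derive(Debug, PartialEq)]
enum MyError {
    InvalidAccount,
}

fn get_two<'a>(
    accounts: &'a [Account],
    i: usize,
    j: usize,
) -> Result<(&'a Account, &'a Account), MyError> {
    // slice::get returns None on out-of-bounds instead of panicking,
    // so malformed indices become recoverable errors.
    let a = accounts.get(i).ok_or(MyError::InvalidAccount)?;
    let b = accounts.get(j).ok_or(MyError::InvalidAccount)?;
    Ok((a, b))
}

fn main() {
    let accounts = vec![Account(1), Account(2)];
    assert!(get_two(&accounts, 0, 1).is_ok());
    // Out-of-range indices now surface as Err rather than aborting.
    assert_eq!(get_two(&accounts, 0, 5).unwrap_err(), MyError::InvalidAccount);
    println!("ok");
}
```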
Summary by CodeRabbit

- New Features
- Refactor
- Tests
- Documentation