
refactor: remove Java-side dataset cache, rely on Rust-side Session#353

Merged
hamersaw merged 12 commits into lance-format:main from LuciferYang:consolidate-read-caches-335
Apr 14, 2026

Conversation

@LuciferYang
Contributor

@LuciferYang LuciferYang commented Mar 27, 2026

Summary

  • Delete LanceDatasetCache; the Rust-side Session already caches metadata and index blocks, so a Java-side Dataset cache only saves manifest deserialization (microseconds).
  • Load fragment metadata on demand via dataset.getFragment(id) in each partition, instead of eagerly building a Map<Integer, Fragment> from dataset.getFragments() on every cache miss. The eager path was O(N) in the total fragment count even though each partition reads only one fragment.
  • Each LanceFragmentScanner opens, owns, and closes its own Dataset through a new LanceRuntime.openDataset() helper.
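The lazy lookup can be sketched with stand-in types (hypothetical interfaces; the real Dataset/Fragment handles live in the lance-java bindings and have richer APIs):

```java
import java.util.List;

// Stand-ins for the Lance handles (hypothetical; the real types are the
// lance-java Dataset/Fragment bindings).
interface Fragment {
    int id();
}

interface Dataset extends AutoCloseable {
    Fragment getFragment(int id);   // lazy: loads one fragment's metadata
    List<Fragment> getFragments();  // eager: loads all fragments' metadata
    @Override
    void close();
}

public class PerPartitionScanSketch {
    // After this PR, each partition resolves only its own fragment:
    static Fragment loadForPartition(Dataset dataset, int fragmentId) {
        // O(1): the removed LanceDatasetCache instead called getFragments()
        // and built a Map<Integer, Fragment>, paying O(N) per cache miss.
        return dataset.getFragment(fragmentId);
    }
}
```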

Changes

File Change
LanceDatasetCache.java Deleted (~291 lines).
LanceRuntime.java Add openDataset() — wires the catalog Session, reconstructs the namespace from the (namespaceImpl, namespaceProperties) strings carried in LanceInputPartition, merges storage options, and pins the version.
LanceFragmentScanner.java create() opens its own Dataset and loads only the target fragment; close() closes both scanner and dataset, using Throwable.addSuppressed so the first failure isn't masked by cleanup.
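The close-ordering described for LanceFragmentScanner.close() can be sketched as follows (a minimal sketch with illustrative field names, not the actual class):

```java
// Close the scanner first, then the dataset, and attach any secondary
// failure via addSuppressed so the first exception is not masked by cleanup.
public class SuppressedCloseSketch implements AutoCloseable {
    private final AutoCloseable scanner;
    private final AutoCloseable dataset;

    public SuppressedCloseSketch(AutoCloseable scanner, AutoCloseable dataset) {
        this.scanner = scanner;
        this.dataset = dataset;
    }

    @Override
    public void close() throws Exception {
        Exception first = null;
        try {
            scanner.close();
        } catch (Exception e) {
            first = e;
        }
        try {
            dataset.close();
        } catch (Exception e) {
            if (first != null) {
                first.addSuppressed(e); // keep the scanner failure primary
            } else {
                first = e;
            }
        }
        if (first != null) {
            throw first;
        }
    }
}
```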

Before / after

Before (main)                          After
-------------                          -----
LanceFragmentScanner.create()          LanceFragmentScanner.create()
  LanceDatasetCache.getFragment()        LanceRuntime.openDataset()   // per partition
    Guava cache.get()                      (Rust Session: metadata + index)
      new CachedDataset(dataset)           dataset.getFragment(id)    // single fragment
        dataset.getFragments()  ← O(N)
    cached.getFragment(id)

The per-catalog Rust Session (configured via LANCE_INDEX_CACHE_SIZE / LANCE_METADATA_CACHE_SIZE) and the global Arrow BufferAllocator are unchanged.

@github-actions github-actions bot added the performance Features that improves performance label Mar 27, 2026
Collaborator

@hamersaw hamersaw left a comment


Do you think it would make sense to just remove the LanceDatasetCache and house fragment caching in the LanceRuntime? It feels like there's just multiple layers of cache here.

@LuciferYang
Contributor Author

Do you think it would make sense to just remove the LanceDatasetCache and house fragment caching in the LanceRuntime? It feels like there's just multiple layers of cache here.

625df7c followed this approach

@hamersaw
Collaborator

I think overall this looks reasonable, but we need some relatively comprehensive benchmarks to back up any performance claims.

…ches-335

# Conflicts:
#	lance-spark-base_2.12/src/main/java/org/lance/spark/internal/LanceDatasetCache.java
#	lance-spark-base_2.12/src/main/java/org/lance/spark/read/LanceCountStarPartitionReader.java
Replace non-existent LanceNamespaceStorageOptionsProvider with
LanceNamespace from getOrCreateNamespace(), matching the namespace
API used by OpenDatasetBuilder. Add missing java.util.List import.

Adds FragmentLoadingBenchmarkTest to quantify the performance
difference between getFragments() (old eager approach, O(N)) and
getFragment(id) (new lazy approach, O(1)).

Results on datasets with 10-1000 fragments show 10x-609x speedup
for the lazy approach, confirming the motivation for PR lance-format#353.

Tagged with @Tag("benchmark") to exclude from normal test runs.

Add test.excludedGroups property (default: benchmark) to surefire
config so @Tag("benchmark") tests are excluded from normal mvn test
runs. Override with -Dtest.excludedGroups= to include them.

Run benchmarks with:
  mvn test -Dtest=FragmentLoadingBenchmarkTest \
    -Dtest.excludedGroups= -Dgroups=benchmark
* <p>Tagged with "benchmark" so it is excluded from normal test runs.
*/
@Tag("benchmark")
public class FragmentLoadingBenchmarkTest {
Contributor Author


Is this microbenchmark sufficient to demonstrate the effect? We can run

mvn test -pl lance-spark-base_2.12 \
  -Dtest=FragmentLoadingBenchmarkTest \
  -Dtest.excludedGroups= \
  -Dgroups=benchmark

to verify:

=== Fragment Loading Benchmark ===
Fragments    |  getFragments() (ms) | getFragment(id) (ms) |    Speedup
----------------------------------------------------------------------
10           |             0.082 ms |             0.007 ms |      11.0x
50           |             0.329 ms |             0.008 ms |      43.2x
100          |             0.630 ms |             0.010 ms |      65.9x
500          |             2.929 ms |             0.008 ms |     380.9x
1000         |             6.064 ms |             0.008 ms |     760.4x

Notes:

  • getFragments(): loads ALL fragment metadata (old eager approach)
  • getFragment(id): loads ONE fragment by ID (new lazy approach)
  • Each worker partition only needs one fragment, so the lazy approach avoids
    loading metadata for all other fragments in the dataset.

The project does not integrate JMH yet, so this benchmark is relatively simple.
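A nanoTime-based harness in that spirit might look like this (an illustrative sketch, not the actual FragmentLoadingBenchmarkTest code):

```java
// Minimal timing harness without JMH: warm up once, then average wall-clock
// time over a fixed number of iterations.
public class TimingSketch {
    /** Runs the task `iterations` times and returns the mean time in ms. */
    static double meanMillis(Runnable task, int iterations) {
        task.run(); // warm-up so JIT/class-loading cost is not measured
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            task.run();
        }
        long elapsed = System.nanoTime() - start;
        return elapsed / 1_000_000.0 / iterations;
    }
}
```

A single warm-up run is far weaker than JMH's forked, steady-state measurement, which is why the numbers above should be read as relative speedups rather than absolute costs.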

Contributor Author


If end-to-end impact is needed, we are running tests on a 1TB TPC-DS dataset, but this improvement likely won’t show a noticeable difference there.

Contributor Author


@summaryzb Do you have any test results about this?

BatchScanExec.equals() compares batch objects via equals(), which
delegates to LanceScan since it implements Batch. Without overriding
equals/hashCode, Object identity is used, so two scans of the same
table are never equal and Spark cannot reuse exchanges.

Compare schema, readOptions, filters, limit, offset, topN, and
aggregation. Exclude scanId (per-instance UUID for tracing only).
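The contract this commit describes can be sketched with a reduced field set (field names are illustrative; the real class is LanceScan, which compares more fields):

```java
import java.util.List;
import java.util.Objects;
import java.util.UUID;

// equals/hashCode over the fields that define the scan, with the
// per-instance scanId excluded so identical scans compare equal.
public final class ScanEqualitySketch {
    private final String schema;
    private final List<String> filters;
    private final int limit;
    private final UUID scanId = UUID.randomUUID(); // tracing only; excluded

    public ScanEqualitySketch(String schema, List<String> filters, int limit) {
        this.schema = schema;
        this.filters = filters;
        this.limit = limit;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof ScanEqualitySketch)) return false;
        ScanEqualitySketch that = (ScanEqualitySketch) o;
        // scanId deliberately excluded so two scans of the same table compare
        // equal and Spark's BatchScanExec can reuse exchanges.
        return limit == that.limit
                && Objects.equals(schema, that.schema)
                && Objects.equals(filters, that.filters);
    }

    @Override
    public int hashCode() {
        return Objects.hash(schema, filters, limit);
    }
}
```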
@hamersaw
Collaborator

Ok, so diving a little deeper. The rust-side implementation has its own caches: for metadata (LANCE_METADATA_CACHE_SIZE) and indexes (LANCE_INDEX_CACHE_SIZE). By caching the actual dataset handle, we're really only saving time in deserializing the dataset manifest bytes, which should be measurable in microseconds (or milliseconds at worst). I'm wondering if we should just remove the notion of Spark-side caches for everything (dataset + fragment) and rely on the rust-side Lance cache. This is basically what the LanceRuntime handles: the rust-side Lance cache.

@LuciferYang
Contributor Author

That makes sense. Since the Session already caches metadata and index blocks on the Rust side, the Java-side LoadingCache<DatasetCacheKey, Dataset> only saves manifest deserialization time. I agree we should remove it and let each scanner open/close its own Dataset handle, relying on the Rust-side cache for the heavy lifting. I'll update the PR to:

  1. Remove the Guava LoadingCache, DatasetCacheKey, getCachedDataset(), and getFragment()
  2. Have LanceFragmentScanner open its own Dataset (via openDataset()) and close it in close()
  3. Keep the per-catalog Session caching as-is

I'll also enhance openDataset() to honor per-query blockSize/indexCacheSize/metadataCacheSize settings in the same change — the cache path currently silently ignores these, and since we're already rewiring the open path, this is the natural place to fix it.

Do you think this is reasonable?

@hamersaw
Collaborator

I'll also enhance openDataset() to honor per-query blockSize/indexCacheSize/metadataCacheSize settings in the same change — the cache path currently silently ignores these, and since we're already rewiring the open path, this is the natural place to fix it.

Do you think this is reasonable?

I'm not following exactly where these are going to be inserted. This gets a little tricky because these are global session configuration options and may not be reasonably applied to a single dataset. Maybe it's worth hacking together a proposal and we can iterate?

@LuciferYang LuciferYang changed the title perf: consolidate read caches and remove eager fragment pre-loading refactor: remove Java-side dataset cache, rely on Rust-side Session Apr 14, 2026
@LuciferYang
Contributor Author

@hamersaw I have refactored the code, removed LanceDatasetCache, and updated the PR description. Can we do further iterations based on this version?

readOptions,
fragmentId,
dataset =
LanceRuntime.openDataset(
Contributor Author


Let me check if we can use the utility methods from Utils here.

inputPartition.getInitialStorageOptions(),
inputPartition.getNamespaceImpl(),
inputPartition.getNamespaceProperties());
dataset =
Contributor Author


@hamersaw On your earlier concern about blockSize/indexCacheSize/metadataCacheSize: this PR doesn't add new API for them now, but switching to Utils.openDatasetBuilder does mean the fragment-scan path now passes these through (previously they were dropped by LanceDatasetCache). This matches what LanceCountStarPartitionReader and ~27 other call sites already do. WDYT?

The refactor to remove LanceDatasetCache dropped namespace reconstruction
on executors. Vended credentials (STS tokens) need the namespace client
to refresh during long-running scans.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Collaborator

@hamersaw hamersaw left a comment


Thanks, I think this looks great! I added the namespace build back in so that we can ensure credential refresh on long-running operations.

@hamersaw hamersaw merged commit be64654 into lance-format:main Apr 14, 2026
16 checks passed
@LuciferYang
Contributor Author

Thank you @hamersaw
