Conversation

@amitsrivastava78
Collaborator

Support

  1. New loading function for Orbax checkpointing (see the usage sketch below)
  2. Support for the sharding feature
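
A minimal usage sketch of what this adds, assuming the OrbaxCheckpoint callback from keras.src.callbacks.orbax_checkpoint and a Model.load() that takes a checkpoint directory; the constructor arguments and the load() signature are illustrative assumptions, not the final API:

```python
import numpy as np
import keras
from keras.src.callbacks.orbax_checkpoint import OrbaxCheckpoint  # import path assumed

x = np.random.rand(32, 8).astype("float32")
y = np.random.rand(32, 1).astype("float32")

model = keras.Sequential([keras.layers.Dense(4, activation="relu"),
                          keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")

# Write Orbax checkpoints during training (constructor arguments assumed).
model.fit(x, y, epochs=3,
          callbacks=[OrbaxCheckpoint(directory="/tmp/orbax_ckpts")])

# Later: rebuild the model and restore the latest checkpoint found in the
# directory via the new load() method (signature assumed from the PR text).
restored = keras.Sequential([keras.layers.Dense(4, activation="relu"),
                             keras.layers.Dense(1)])
restored.compile(optimizer="adam", loss="mse")
restored.load("/tmp/orbax_ckpts")
```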

@gemini-code-assist
Contributor

Summary of Changes

Hello @amitsrivastava78, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances Keras's checkpoint loading capabilities by introducing a dedicated Model.load() method for Orbax checkpoints. This new functionality intelligently handles checkpoint directories, automatically identifying and loading the latest state. Crucially, it incorporates advanced sharding support for the JAX backend, allowing models to be loaded and resharded dynamically to match the current distribution configuration. Additionally, the Model.export() method has been refined by removing the LiteRT export option, focusing on core export functionalities.

Highlights

  • New Model.load() method for Orbax Checkpoints: A new load method has been added to the Model class, enabling direct loading of Orbax checkpoints. This method can automatically detect the latest checkpoint in a directory or load a specific step.
  • Sharding Support for JAX Backend: The Model.load() method now supports automatic resharding of data when loading Orbax checkpoints on the JAX backend, adapting to the current distribution strategy (see the sketch after this list).
  • Refined Model.export() Functionality: The Model.export() method has been updated to remove support for LiteRT export, streamlining the available export formats.
  • Comprehensive Testing for Loading and Sharding: Extensive new tests have been added to orbax_checkpoint_test.py to validate the new Model.load() method, including synchronous/asynchronous saving, layout preservation, and JAX-specific resharding and distributed checkpoint structure verification.
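
A sketch of the JAX resharding scenario described above: a checkpoint written under one layout is loaded after a different keras.distribution layout has been activated, and the arrays are resharded to match. The step argument and the load() call follow the highlight text and are assumptions about the new API:

```python
import jax
import keras
from keras import distribution

# Activate a data-parallel layout over all local JAX devices before loading.
distribution.set_distribution(
    distribution.DataParallel(devices=jax.devices())
)

model = keras.Sequential([keras.layers.Dense(8, activation="relu"),
                          keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.build(input_shape=(None, 16))

# Load step 2 explicitly instead of the latest checkpoint; the loaded arrays
# are resharded to the distribution activated above (per the highlight).
model.load("/tmp/orbax_ckpts", step=2)
```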

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces significant new functionality for Orbax checkpoint loading and sharding support. The changes include a new Model.load() method, helper functions for checkpoint discovery, and a comprehensive suite of new tests. The implementation is generally solid and well-tested. My feedback focuses on improving code maintainability by reducing duplication and enhancing error handling in the new test cases.

@codecov-commenter

codecov-commenter commented Dec 8, 2025

Codecov Report

❌ Patch coverage is 68.42105% with 24 lines in your changes missing coverage. Please review.
✅ Project coverage is 82.66%. Comparing base (f0a48a6) to head (f12b1a2).
⚠️ Report is 24 commits behind head on master.

Files with missing lines                   Patch %   Lines
keras/src/callbacks/orbax_checkpoint.py    57.69%    9 Missing and 2 partials ⚠️
keras/src/saving/saving_api.py             70.00%    2 Missing and 4 partials ⚠️
keras/src/saving/orbax_util.py             73.33%    3 Missing and 1 partial ⚠️
keras/src/utils/module_utils.py            80.00%    1 Missing and 1 partial ⚠️
keras/src/saving/saving_lib.py              0.00%    0 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master   #21903      +/-   ##
==========================================
+ Coverage   76.30%   82.66%   +6.36%     
==========================================
  Files         580      589       +9     
  Lines       60029    61298    +1269     
  Branches     9432     9610     +178     
==========================================
+ Hits        45803    50673    +4870     
+ Misses      11750     8136    -3614     
- Partials     2476     2489      +13     
Flag               Coverage Δ
keras              82.49% <67.10%> (+6.32%) ⬆️
keras-jax          61.70% <65.78%> (-0.43%) ⬇️
keras-numpy        56.91% <27.63%> (-0.41%) ⬇️
keras-openvino     37.21% <26.31%> (+2.91%) ⬆️
keras-tensorflow   63.86% <61.84%> (?)
keras-torch        62.60% <61.84%> (-0.62%) ⬇️

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

- Remove complex JAX abstract pytree logic that was causing 'free(): invalid pointer' errors
- Use preservation mode for all backends to avoid state structure mismatches
- This prevents memory corruption when loading checkpoints with different optimizer states
- Replace bare 'except:' with specific 'except (ImportError, AttributeError):'
  for distribution import patterns
- This improves error handling by only catching expected exceptions
- Extract duplicated tensor conversion logic into _to_numpy() helper method
- Replace duplicated code blocks in optimizer and metrics variable comparisons
- Improves maintainability and reduces code duplication
- Add multi-host support using orbax.checkpoint.multihost APIs
- Remove manual sync calls around save operations
- Use proper Orbax v1 APIs instead of brittle file inspection
- Fix 80-column line length violations in test files
- Ensure cross-backend compatibility with appropriate test skipping
- Clarify checkpoint directory terminology in documentation
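
A possible shape for the _to_numpy() helper and the narrowed exception handling mentioned in the commit notes above; this is a sketch, not the code that was actually committed, and the specific distribution import is illustrative:

```python
import numpy as np


def _to_numpy(tensor):
    # Convert a backend tensor (TF, torch, JAX) or plain Python value to a
    # NumPy array so that saved and restored variables compare uniformly.
    if hasattr(tensor, "numpy"):
        return tensor.numpy()
    return np.asarray(tensor)


# Catch only the expected failure modes around the optional distribution
# import, instead of a bare `except:`.
try:
    from keras.src.distribution import distribution_lib
except (ImportError, AttributeError):
    distribution_lib = None
```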
Collaborator

@hertschuh hertschuh left a comment


Thanks for the PR!

Can you tighten the orbax_checkpoint_test.py file? It's extremely long and hard to follow. I think:

  • a lot fewer tests could cover basically the same ground
  • some parameterized tests could minimize code duplication
  • the verification blocks could be much shorter using keras.tree and self.assertAllClose; I gave some examples (see the sketch after this list)
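
A sketch of the kind of shortened verification block suggested here, assuming the test class derives from Keras's testing TestCase (which provides assertAllClose); the helper name and its placement are illustrative:

```python
from keras import tree


def assert_same_weights(test_case, model, restored_model):
    # Compare every weight of `restored_model` against `model` in one pass,
    # instead of looping over layers and variables by hand.
    tree.map_structure(
        test_case.assertAllClose,
        [v.numpy() for v in model.weights],
        [v.numpy() for v in restored_model.weights],
    )
```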


@orbax-dev orbax-dev left a comment


LGTM, it won't let me approve with comments.

Collaborator

@hertschuh hertschuh left a comment


A couple of complications I didn't think about:

- Remove redundant try/except blocks in favor of LazyModule error handling
- Use ocp.multihost directly from LazyModule instead of custom function
- Remove unnecessary dummy apply_gradients in Orbax loading (confirmed required)
- Update sync key to be process-safe
- Remove unused test case for asset support
- Improve LazyModule to expose multihost from parent orbax.checkpoint module
- Remove manual directory cleanup in _save_checkpoint that interfered with Orbax preservation policies
- Simplify preservation policy setup to use LatestN directly instead of AnyPreservationPolicy wrapper
- Update asset directory structure to use checkpoint_dir/assets/step/ format
- Add comprehensive asset saving/loading tests for both sync and async modes
- Make test_save_freq_epoch more robust by checking for numeric checkpoint names rather than specific epoch
- Fix asset loading to handle new directory structure in saving_api.py

All Orbax checkpoint tests now pass on both JAX and TensorFlow backends.
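
A sketch of how a LazyModule can expose orbax.checkpoint as an optional dependency, in the spirit of the notes above (the PR also routes the multihost submodule through the same mechanism); the exact wiring in this PR may differ:

```python
from keras.src.utils.module_utils import LazyModule

# The real import only happens on first attribute access, so `import keras`
# still works when orbax is not installed.
ocp = LazyModule("orbax.checkpoint")


def make_manager(directory):
    # Accessing an attribute triggers the lazy import; CheckpointManager is
    # part of the public orbax.checkpoint API.
    return ocp.CheckpointManager(directory)
```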
Collaborator

@hertschuh hertschuh left a comment


I didn't see where the hook is to support model.load_weights. Did I miss something?

- Save custom layer assets (binary data, strings, arrays) directly in the pytree as base64-encoded strings for Orbax compatibility
- Remove separate asset file saving to eliminate synchronization races
- Update loading to extract assets from pytree and decode back to original types
- Modify tests to verify asset loading without directory checks
- Ensures atomic saves with proper preservation policy handling
…points

- Add asset loading logic to saving_api.load_weights for Orbax checkpoints
- Include assets in weights-only Orbax checkpoints
- Fix layer naming in MockLayerWithAssets test to avoid conflicts
- Add test for load_weights with assets from Orbax checkpoints
@amitsrivastava78
Collaborator Author

> I didn't see where the hook is to support model.load_weights. Did I miss something?

Added asset loading logic to saving_api.load_weights for Orbax checkpoints
Included assets in weights-only Orbax checkpoints
Added test for load_weights with assets from Orbax checkpoints

Collaborator

@hertschuh hertschuh left a comment


The asset support won't work the way it is coded right now; see my comments in the code.

saving_lib._save_state shows how it works, and it's pretty specific:

  • the way it recurses in KerasSaveables (not layers)
  • the way it builds a name / path during the recursion
  • the way it calls save_assets
  • etc.

The easiest would be to reuse it while disabling the weights store (which I think you started).

The complication is that you would need to create a temp folder and then call save_assets and then enumerate files, read them, create a sub-key and base64 encode the content.

And the reverse when reloading...

Also, for unit tests, you should test with a StringLookup or IntegerLookup layer to make sure it works end to end.


Now... honestly, this is neither indispensable nor urgent.

Let's first submit this PR without model config and without model assets (meaning reloading is only supported via load_weights).

Then let's decide on how to proceed. Because I think if we do want to support this, we need a bit more, like model.save should support this format as an option.


One last thing I thought about: model.save_weights should support the Orbax format for consistency.
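
A rough sketch of the temp-folder / base64 approach described in this comment, under the assumption that each KerasSaveable writes its asset files into the directory passed to save_assets and reads them back via a matching load_assets; the per-saveable name/path prefix built during the _save_state recursion is omitted for brevity, and this is an illustration of the idea, not the PR's implementation:

```python
import base64
import os
import tempfile


def assets_to_pytree_entries(saveable):
    # Let the saveable write its asset files into a temp folder, then fold
    # each file into the checkpoint pytree as a base64-encoded string keyed
    # by its relative path.
    entries = {}
    with tempfile.TemporaryDirectory() as tmp_dir:
        saveable.save_assets(tmp_dir)
        for root, _, files in os.walk(tmp_dir):
            for name in files:
                path = os.path.join(root, name)
                rel = os.path.relpath(path, tmp_dir)
                with open(path, "rb") as f:
                    entries[rel] = base64.b64encode(f.read()).decode("ascii")
    return entries


def pytree_entries_to_assets(entries, saveable):
    # Reverse direction when reloading: materialize the encoded files into a
    # temp folder and let the saveable read them back.
    with tempfile.TemporaryDirectory() as tmp_dir:
        for rel, payload in entries.items():
            path = os.path.join(tmp_dir, rel)
            os.makedirs(os.path.dirname(path) or tmp_dir, exist_ok=True)
            with open(path, "wb") as f:
                f.write(base64.b64decode(payload))
        saveable.load_assets(tmp_dir)
```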

- Fix asset collection to traverse all KerasSaveable objects recursively
- Use hierarchical asset storage with unique paths instead of layer names
- Move Orbax-specific functions to dedicated orbax.py module
- Ensure assets are collected from nested model components
- Maintain backward compatibility with existing checkpoint formats
- Simplify h5py import handling
- Remove _collect_assets_recursive and related asset collection logic from OrbaxCheckpoint
- Remove asset loading functions from orbax_util.py
- Update saving_api.py to remove asset-related imports
- Fix test to remove assets directory filtering
- Keep weights-only functionality for clean PR submission
- Asset support can be added later as separate enhancement
@amitsrivastava78
Collaborator Author

> The asset support won't work the way it is coded right now; see my comments in the code. [...]
>
> Let's first submit this PR without model config and without model assets (meaning reloading is only supported via load_weights).
>
> One last thing I thought about: model.save_weights should support the Orbax format for consistency.

Thank you for the detailed feedback on the asset support implementation. You're absolutely right that the current approach was flawed and wouldn't work properly.

I've addressed your suggestions by removing asset support entirely from this PR, as you recommended. The implementation now focuses on clean weights-only functionality for Orbax checkpoints.

@hertschuh
Collaborator

> Thank you for the detailed feedback on the asset support implementation. You're absolutely right that the current approach was flawed and wouldn't work properly.
>
> I've addressed your suggestions by removing asset support entirely from this PR, as you recommended. The implementation now focuses on clean weights-only functionality for Orbax checkpoints.

Oh, but you need to remove the config support also. They go together. Either you have just the weights, or you have the model definition, but the model definition is a combination of the config and the assets.

Collaborator

@hertschuh hertschuh left a comment


The PR UI had a bug; some comments are duplicated.

Also, you need to remove the config support, and therefore the save_weights_only=False option. That's because config and assets go together. Either you have just the weights, or you have the model definition, but the model definition is a combination of the config and the assets.

- Remove save_weights_only option from OrbaxCheckpoint callback
- Always save full model state (weights + model_config + optimizer + metrics)
- Replace h5py import with LazyModule for optional dependency support
- Remove unused _load_model_from_orbax_checkpoint function
- Rename utility functions to follow naming standards:
  * _is_orbax_checkpoint -> is_orbax_checkpoint
  * _find_latest_orbax_checkpoint -> find_latest_orbax_checkpoint
- Update all imports and usages accordingly
- Clean up tests to remove save_weights_only parameter variations
- Ensure 'import keras' works without h5py installed
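
A sketch of how the renamed utilities and the lazy h5py import fit together, based on the notes above; the module paths match the files touched by this PR, but the function signatures and the dispatch shown here are assumptions for illustration:

```python
from keras.src.utils.module_utils import LazyModule

# h5py becomes an optional dependency: the import only happens when an HDF5
# file is actually opened, so `import keras` works without h5py installed.
h5py = LazyModule("h5py")


def load_weights(model, filepath):
    # Dispatch between the Orbax and HDF5 formats. is_orbax_checkpoint and
    # find_latest_orbax_checkpoint are the renamed helpers from orbax_util;
    # their signatures here are assumed.
    from keras.src.saving.orbax_util import (
        find_latest_orbax_checkpoint,
        is_orbax_checkpoint,
    )

    if is_orbax_checkpoint(filepath):
        step_dir = find_latest_orbax_checkpoint(filepath)
        ...  # restore variables from the Orbax checkpoint at step_dir
    else:
        with h5py.File(filepath, "r") as f:
            ...  # existing .weights.h5 path
```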