-
Notifications
You must be signed in to change notification settings - Fork 19.7k
Orbax Loading and Sharding Support feature #21903
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: master
Are you sure you want to change the base?
Orbax Loading and Sharding Support feature #21903
Conversation
Summary of ChangesHello @amitsrivastava78, I'm Gemini Code Assist1! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly enhances Keras's checkpoint loading capabilities by introducing a dedicated Highlights
Using Gemini Code AssistThe full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips. Invoking Gemini You can request assistance from Gemini at any point by creating a comment using either
Customization To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a Limitations & Feedback Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here. You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension. Footnotes
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Code Review
This pull request introduces significant new functionality for Orbax checkpoint loading and sharding support. The changes include a new Model.load() method, helper functions for checkpoint discovery, and a comprehensive suite of new tests. The implementation is generally solid and well-tested. My feedback focuses on improving code maintainability by reducing duplication and enhancing error handling in the new test cases.
ee32491 to
10370f5
Compare
Codecov Report❌ Patch coverage is Additional details and impacted files@@ Coverage Diff @@
## master #21903 +/- ##
==========================================
+ Coverage 76.30% 82.66% +6.36%
==========================================
Files 580 589 +9
Lines 60029 61298 +1269
Branches 9432 9610 +178
==========================================
+ Hits 45803 50673 +4870
+ Misses 11750 8136 -3614
- Partials 2476 2489 +13
Flags with carried forward coverage won't be shown. Click here to find out more. ☔ View full report in Codecov by Sentry. 🚀 New features to boost your workflow:
|
10370f5 to
325a03f
Compare
325a03f to
43d45d0
Compare
- Remove complex JAX abstract pytree logic that was causing 'free(): invalid pointer' errors - Use preservation mode for all backends to avoid state structure mismatches - This prevents memory corruption when loading checkpoints with different optimizer states
- Replace bare 'except:' with specific 'except (ImportError, AttributeError):' for distribution import patterns - This improves error handling by only catching expected exceptions
- Extract duplicated tensor conversion logic into _to_numpy() helper method - Replace duplicated code blocks in optimizer and metrics variable comparisons - Improves maintainability and reduces code duplication
c592a27 to
d8a86e8
Compare
- Add multi-host support using orbax.checkpoint.multihost APIs - Remove manual sync calls around save operations - Use proper Orbax v1 APIs instead of brittle file inspection - Fix 80-column line length violations in test files - Ensure cross-backend compatibility with appropriate test skipping - Clarify checkpoint directory terminology in documentation
hertschuh
left a comment
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Thanks for the PR!
Can you tighten the orbax_checkpoint_test.py file. It's extremely long and hard to follow. I think:
- a lot fewer tests could cover basically the same
- some parameterized tests could minimize code duplication
- the verification blocks could be much shorter using
keras.treeandself.assertAllClose, I gave some examples.
orbax-dev
left a comment
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
LGTM, it won't let me approve with comments.
hertschuh
left a comment
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
A couple of complications I didn't think about:
- Remove redundant try/except blocks in favor of LazyModule error handling - Use ocp.multihost directly from LazyModule instead of custom function - Remove unnecessary dummy apply_gradients in Orbax loading (confirmed required) - Update sync key to be process-safe - Remove unused test case for asset support - Improve LazyModule to expose multihost from parent orbax.checkpoint module
- Remove manual directory cleanup in _save_checkpoint that interfered with Orbax preservation policies - Simplify preservation policy setup to use LatestN directly instead of AnyPreservationPolicy wrapper - Update asset directory structure to use checkpoint_dir/assets/step/ format - Add comprehensive asset saving/loading tests for both sync and async modes - Make test_save_freq_epoch more robust by checking for numeric checkpoint names rather than specific epoch - Fix asset loading to handle new directory structure in saving_api.py All Orbax checkpoint tests now pass on both JAX and TensorFlow backends.
- Remove manual directory cleanup in _save_checkpoint that interfered with Orbax preservation policies - Simplify preservation policy setup to use LatestN directly instead of AnyPreservationPolicy wrapper - Update asset directory structure to use checkpoint_dir/assets/step/ format - Add comprehensive asset saving/loading tests for both sync and async modes - Make test_save_freq_epoch more robust by checking for numeric checkpoint names rather than specific epoch - Fix asset loading to handle new directory structure in saving_api.py All Orbax checkpoint tests now pass on both JAX and TensorFlow backends.
hertschuh
left a comment
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I didn't see where the hook is to support model.load_weights. Did I miss something?
- Save custom layer assets (binary data, strings, arrays) directly in the pytree as base64-encoded strings for Orbax compatibility - Remove separate asset file saving to eliminate synchronization races - Update loading to extract assets from pytree and decode back to original types - Modify tests to verify asset loading without directory checks - Ensures atomic saves with proper preservation policy handling
…points - Add asset loading logic to saving_api.load_weights for Orbax checkpoints - Include assets in weights-only Orbax checkpoints - Fix layer naming in MockLayerWithAssets test to avoid conflicts - Add test for load_weights with assets from Orbax checkpoints
Added asset loading logic to saving_api.load_weights for Orbax checkpoints |
hertschuh
left a comment
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
The assets support won't work the way it is coded right now, see my comments in the code.
saving_lib._save_state shows how it works, and it's pretty specific:
- the way it recurses in
KerasSaveables (not layers) - the way it builds a name / path during the recursion
- the way it calls
save_assets - etc.
The easiest would be to reuse it while disabling the weights store (which I think you started).
The complication is that you would need to create a temp folder and then call save_assets and then enumerate files, read them, create a sub-key and base64 encode the content.
And the reverse when reloading...
Also, for unit tests, you should test with a StringLookup or IntegerLookup layer to make sure it works end to end.
Now... honestly, this is neither indispensable nor urgent.
Let's first submit this PR without model config and without model assets (meaning reloading is only supported via load_weights).
Then let's decide on how to proceed. Because I think if we do want to support this, we need a bit more, like model.save should support this format as an option.
Last thing I thought about. model.save_weights should support the Orbax format for consistency.
- Fix asset collection to traverse all KerasSaveable objects recursively - Use hierarchical asset storage with unique paths instead of layer names - Move Orbax-specific functions to dedicated orbax.py module - Ensure assets are collected from nested model components - Maintain backward compatibility with existing checkpoint formats - Simplify h5py import handling
- Remove _collect_assets_recursive and related asset collection logic from OrbaxCheckpoint - Remove asset loading functions from orbax_util.py - Update saving_api.py to remove asset-related imports - Fix test to remove assets directory filtering - Keep weights-only functionality for clean PR submission - Asset support can be added later as separate enhancement
Thank you for the detailed feedback on the asset support implementation. You're absolutely right that the current approach was flawed and wouldn't work properly. I've addressed your suggestions by removing asset support entirely from this PR, as you recommended. The implementation now focuses on clean weights-only functionality for Orbax checkpoints. |
Oh, but you need to remove the config support also. They go together. Either you have just the weights, or you have the model definition, but the model definition is a combination of the config and the assets. |
hertschuh
left a comment
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
The PR UI had a bug, some comments are duplicated.
Also, you need to remove the config support, and therefore the save_weights_only=False option. That's because config and assets go together. Either you have just the weights, or you have the model definition, but the model definition is a combination of the config and the assets.
- Remove save_weights_only option from OrbaxCheckpoint callback - Always save full model state (weights + model_config + optimizer + metrics) - Replace h5py import with LazyModule for optional dependency support - Remove unused _load_model_from_orbax_checkpoint function - Rename utility functions to follow naming standards: * _is_orbax_checkpoint -> is_orbax_checkpoint * _find_latest_orbax_checkpoint -> find_latest_orbax_checkpoint - Update all imports and usages accordingly - Clean up tests to remove save_weights_only parameter variations - Ensure 'import keras' works without h5py installed
Support