fix: [modelopt 0.43][GH200][llm_ptq - autoquant / trtllm] Llama-3 (#5997832)#1079
Draft
fix: [modelopt 0.43][GH200][llm_ptq - autoquant / trtllm] Llama-3 (#5997832)#1079
Conversation
…ant / trtll Signed-off-by: Pensieve Bot <pensieve-bot@nvidia.com>
Collaborator
Author
|
/ok to test e6f7a20 |
Contributor
|
Important Review skippedDraft detected. Please check the settings in the CodeRabbit UI or the ⚙️ Run configurationConfiguration used: Path: .coderabbit.yaml Review profile: CHILL Plan: Pro Run ID: You can disable this status message by setting the Use the checkbox below for a quick retry:
✨ Finishing Touches🧪 Generate unit tests (beta)
📝 Coding Plan
Comment |
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
Sign up for free
to join this conversation on GitHub.
Already have an account?
Sign in to comment
Add this suggestion to a batch that can be applied as a single commit.This suggestion is invalid because no changes were made to the code.Suggestions cannot be applied while the pull request is closed.Suggestions cannot be applied while viewing a subset of changes.Only one suggestion per line can be applied in a batch.Add this suggestion to a batch that can be applied as a single commit.Applying suggestions on deleted lines is not supported.You must change the existing code in this line in order to create a valid suggestion.Outdated suggestions cannot be applied.This suggestion has been applied or marked resolved.Suggestions cannot be applied from pending reviews.Suggestions cannot be applied on multi-line comments.Suggestions cannot be applied while the pull request is queued to merge.Suggestion cannot be applied right now. Please check back later.
Fixes #5997832
Summary
When serving a quantized Llama-3.1-8B-Instruct model with int4_awq_fp8_bits_6 configuration using TensorRT-LLM, the inference fails with ValueError indicating that QuantConfig object has no field 'quantized_layers'. The error occurs during model loading when TensorRT-LLM attempts to read the hf_quant_config.json and set quantization parameters.
Root Cause
The quantized model export produces an hf_quant_config.json that includes a 'quantized_layers' field mapping layer names to their quantization algorithms. However, the QuantConfig class definition in modelopt/torch/quantization/config.py (which is a Pydantic model) does not have this field defined, causing deserialization/validation to fail when TensorRT-LLM tries to instantiate or read this configuration during inference.
Agent Fix Summary
Fixed GitHub issue: TensorRT-LLM inference failed with ValueError for quantized_layers field.
Root cause: The hf_quant_config.json export file contained a 'quantized_layers' field that TensorRT-LLM's QuantConfig Pydantic model doesn't recognize.
Solution: Modified modelopt/torch/export/unified_export_hf.py to remove the 'quantized_layers' field before saving the hf_quant_config.json file, while preserving all other essential quantization information.
Changes:
The fix is minimal, focused, backward compatible, and doesn't affect other export paths. It ensures TensorRT-LLM can successfully load and deserialize the quantization config for mixed-precision models.
Files Changed
modelopt/torch/export/unified_export_hf.pyReproduction
To validate on a Slurm cluster, save the files below under
tools/launcher/in Model-Optimizer and run:cd tools/launcher uv run launch.py --yaml examples/triage/test_hf_quant_config_compat.yaml --yescd tools/launcher uv run launch.py --yaml examples/triage/test_export_quantized_layers_fix.yaml --yescd tools/launcher uv run launch.py --yaml examples/triage/test_quantized_layers_fix.yaml --yestools/launcher/examples/triage/test_hf_quant_config_compat.shtools/launcher/examples/triage/test_export_quantized_layers_fix.shtools/launcher/examples/triage/test_hf_quant_config_compat.yamltools/launcher/examples/triage/test_export_quantized_layers_fix.yamltools/launcher/examples/triage/test_quantized_layers_fix.yamltools/launcher/examples/triage/test_quantized_layers_fix.shAuto-generated by pensieve
/magic-triageagentic fix — please review before merging.