feat: add OOM pre-check for vision models and fix InternVL image dimension handling #1253
Contributor
Code Review
This pull request introduces an OOM (Out of Memory) pre-check mechanism for Qwen-family vision models by performing a dummy forward pass with worst-case image dimensions during initialization. It updates several model implementations to support this check and refactors the ViT model to derive inference parameters from the configuration instead of environment variables. Feedback focuses on improving memory management within the pre-check function by explicitly deleting tensors and clearing the CUDA cache, as well as refining exception handling and logging practices.
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
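The pre-check described above — a dummy forward pass at worst-case image dimensions during initialization — can be sketched in pure Python. All names below (`worst_case_image_size`, `oom_precheck`, the `max_pixels`/`patch_size` config keys) are illustrative assumptions, not the PR's actual API; a real implementation would build a dummy tensor and call the vision tower.

```python
def worst_case_image_size(cfg):
    """Derive the largest patch-aligned image the config admits (illustrative)."""
    max_pixels = cfg.get("max_pixels", 1280 * 28 * 28)
    patch = cfg.get("patch_size", 14)
    side = int(max_pixels ** 0.5) // patch * patch  # largest patch-aligned square
    return side, side

def oom_precheck(forward, cfg):
    """Run one dummy forward at worst-case dimensions so allocation failures
    surface at startup rather than mid-serving; re-raise with context."""
    h, w = worst_case_image_size(cfg)
    try:
        forward(h, w)  # stand-in for a real dummy-tensor forward pass
    except MemoryError as e:
        raise RuntimeError(
            f"vision model cannot handle worst-case image {h}x{w}; "
            "reduce max_pixels or free device memory"
        ) from e
    return h, w
```

The point of the check is fail-fast semantics: if the worst case fits at init time, no later image can exceed it.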
sufubao added a commit that referenced this pull request on Apr 16, 2026
Design spec for eliminating the multimodal OOM class that surfaced with Qwen3.5-VL. Replaces PR #1253 in full:
- absorbs its Qwen stress helpers (minus the empty_cache call that released the measured peak)
- adds the min-max bug fix at visualserver/manager.py:87
- tightens visual+audio concurrency semaphores from x8 to x1
- ports _check_decode_infer from origin/qw35_stable
- re-shapes the LLM init into a two-pass probe-measure-rebuild-validate auto-profile that eliminates --mem_fraction as a tuning knob

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
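The probe-measure-rebuild-validate flow named above might look like the following pure-Python sketch. Everything here is an assumption for illustration — the callables (`measure_free`, `build`, `validate`), the reserve constant, and the per-token byte cost are hypothetical stand-ins, not the spec's actual code:

```python
def auto_profile_kv_cache(measure_free, build, validate,
                          token_bytes, reserve=512 << 20):
    """Two-pass auto-profile (illustrative sketch).

    measure_free(): free device memory after the visual stress pass.
    build(n):       construct a KV cache of n tokens, returning a model handle.
    validate(m):    worst-case decode against the rebuilt model; raises on OOM.
    """
    # Pass 1: probe with a tiny cache so initialization itself cannot OOM.
    build(1)
    free = measure_free()
    # Size the real cache from *measured* free memory (minus a safety
    # reserve), replacing a hand-tuned --mem_fraction knob.
    n_tokens = max(1, (free - reserve) // token_bytes)
    # Pass 2: rebuild at the computed size, then validate with worst-case decode.
    model = build(n_tokens)
    validate(model)
    return n_tokens
```

The measurement happens after the visual stress pass, which is why the design keeps the stress peak reserved: the LLM sizes its cache against memory the vision side can never reclaim later.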
sufubao added a commit that referenced this pull request on Apr 16, 2026
…ening

- fix min/max bug at visualserver/manager.py:87 — was silently capping per-DP visual batch size to 1 regardless of --visual_infer_batch_size
- tighten visual and audio runtime semaphores from *8 to *1 so runtime concurrency never exceeds the stress-tested peak
- add per-model _check_max_len_infer for Qwen2_VL, Qwen2_5_VL, Qwen3_VL, Qwen3_omni_moe (absorbed from PR #1253)
- qwen_vl_check_max_len_infer deliberately omits torch.cuda.empty_cache so the stress peak stays pinned in the caching allocator at the driver level for the rest of process lifetime — this is the reserve-then-yield contract that lets the LLM's later profile_size see peer reservations
- wire _check_max_len_infer call site into visual model_rpc.py::exposed_init_model with hasattr gate and warning log for uncovered model types
- absorb PR #1253's config-driven worst-case derivation for InternVL
- port _check_decode_infer helper from origin/qw35_stable into basemodel.py (not yet called from __init__ — Commit 2 wires it in)

Part of the multimodal OOM fix. See docs/superpowers/specs/2026-04-16-multimodal-oom-fix-design.md for rationale.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
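The min/max bug fixed at visualserver/manager.py:87 is the classic one-character inversion: `min(1, x)` clamps to at most 1, where `max(1, x)` gives a floor of 1. A minimal sketch of the before/after (the function and argument names here are hypothetical, chosen to mirror the flags in the commit message):

```python
def per_dp_visual_batch(visual_infer_batch_size, dp_size):
    """Per-DP visual batch size. The buggy variant used min(), which
    caps the result at 1 no matter what --visual_infer_batch_size says;
    the fix uses max() so 1 becomes a floor instead of a ceiling."""
    buggy = min(1, visual_infer_batch_size // dp_size)  # always <= 1
    fixed = max(1, visual_infer_batch_size // dp_size)  # floor of 1, scales up
    return buggy, fixed
```

The bug is silent because 1 is still a valid batch size, so inference stays correct and only throughput (and the realism of the stress test relative to runtime batching) suffers.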