
Improve inference speed on the edge side #17948

@imjking

Description


Hi, I exported the PTE model for on-device inference on SA8255 using the following command:

```bash
python examples/qualcomm/oss_scripts/llama/llama.py -b build-android -m SA8255 --compile_only --decoder_model qwen3-0_6b --system_prompt "你是指令优化专家,负责将用户的口语化、模糊或多意图指令转化为清晰、可独立执行的指令。" --prompt "/no_think开一下哔哩哔哩把左边车窗关上" --model_mode hybrid --max_seq_len 512 --prefill_ar_len 64 --temperature 0.8 --artifact ./qwen3_0_6b_sa8255_full_sft
```

(The Chinese system prompt instructs the model to act as an instruction-optimization expert that rewrites colloquial, vague, or multi-intent user instructions into clear, independently executable ones; the test prompt asks it to open Bilibili and close the left car window.)

Now I want to improve inference speed on the in-car device. I know I can change model_mode (kv/hybrid/lookahead), reduce max_seq_len (1024/512/256), or use lower quantization precision (16a4w/16a8w/8a8w), but changing model_mode or lowering the quantization precision also degrades the model's output quality.

Are there any other ways to improve inference speed?
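For context on why reducing max_seq_len and KV-cache precision speeds up decoding, here is a minimal sketch estimating KV-cache size, which dominates per-token memory traffic during decode. The layer/head/dimension numbers are assumptions chosen for illustration of a Qwen3-0.6B-sized decoder, not values read from the exported PTE file:

```python
# Rough KV-cache size estimate at various sequence lengths and
# cache bit-widths. Smaller caches mean less memory bandwidth per
# decoded token, which is usually the decode-phase bottleneck.

def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   seq_len: int, bits: int) -> int:
    # K and V caches: 2 tensors per layer, each [kv_heads, seq_len, head_dim]
    return 2 * layers * kv_heads * seq_len * head_dim * bits // 8

# Assumed dimensions (hypothetical, for illustration only):
# 28 layers, 8 KV heads, head_dim 128.
for seq_len in (256, 512, 1024):
    for bits in (16, 8):
        mib = kv_cache_bytes(28, 8, 128, seq_len, bits) / 2**20
        print(f"seq_len={seq_len:4d}  {bits:2d}-bit KV cache: {mib:6.1f} MiB")
```

Under these assumed dimensions, halving max_seq_len or the cache bit-width halves the KV-cache footprint, which is why those two knobs trade quality and context length for speed.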

cc @cccclai @winskuo-quic @shewu-quic @haowhsu-quic @DannyYuyang-quic @cbilgin


Labels: module: qnn (Issues related to Qualcomm's QNN delegate and code under backends/qualcomm), partner: qualcomm (For backend delegation, kernels, demo, etc. from the 3rd-party partner, Qualcomm)
