2 changes: 1 addition & 1 deletion docs/user_manual/configure.rst
@@ -253,7 +253,7 @@ Underneath you can find the list of all the available datasets.
- ``text: str``
* - Image Generation
- `LAION256 <https://huggingface.co/datasets/nannullna/laion_subset>`_, `OpenImage <https://huggingface.co/datasets/data-is-better-together/open-image-preferences-v1>`_, `COCO <https://huggingface.co/datasets/phiyodr/coco2017>`_, `DrawBench <https://huggingface.co/datasets/sayakpaul/drawbench>`_, `PartiPrompts <https://huggingface.co/datasets/nateraw/parti-prompts>`_, `GenAIBench <https://huggingface.co/datasets/BaiqiL/GenAI-Bench>`_
- ``image_generation_collate``, ``prompt_collate``
- ``image_generation_collate``, ``prompt_with_auxiliaries_collate``
- ``text: str``, ``image: Optional[PIL.Image.Image]``
* - Image Classification
- `ImageNet <https://huggingface.co/datasets/zh-plus/tiny-imagenet>`_, `MNIST <https://huggingface.co/datasets/ylecun/mnist>`_, `CIFAR10 <https://huggingface.co/datasets/uoft-cs/cifar10>`_
42 changes: 42 additions & 0 deletions docs/user_manual/evaluate.rst
@@ -100,6 +100,48 @@ Evaluation Components
The |pruna| package provides a variety of evaluation metrics to assess your models.
In this section, we'll introduce the evaluation metrics you can use.

.. _vlm_judge_metrics:

Vision-Language Judge Metrics
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Some **quality** metrics (for example ``vqa``, ``qa_accuracy``, ``alignment_score``, OCR-based text scores, ``vie_score``) use a **vision-language model** as a judge. By default they use a hosted API via ``litellm`` (``vlm_type="litellm"``); you can load a local Hugging Face model with ``vlm_type="transformers"``. When using string metric names with ``Task``, the default hosted route uses ``openai/gpt-4o`` unless you construct the metric explicitly.

**API keys (hosted judges).** Pruna resolves the key it passes to LiteLLM in this order: the metric's ``api_key`` argument (if set), then ``LITELLM_API_KEY``, then ``OPENAI_API_KEY``. This matches common usage for OpenAI routes: LiteLLM documents ``OPENAI_API_KEY`` for OpenAI, and ``LITELLM_API_KEY`` is an extra environment variable Pruna checks so you can supply a key without setting ``OPENAI_API_KEY``. If all three are unset, LiteLLM can still pick up provider-specific variables (for example ``ANTHROPIC_API_KEY``), as described in LiteLLM's "Setting API Keys" and provider docs. If you use a non-OpenAI route but have ``OPENAI_API_KEY`` set for other tools, pass ``api_key`` explicitly so Pruna does not forward the wrong key. Credentials for Replicate or other image-only backends are separate and are not used by these metrics.
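
For example, here is a minimal sketch of that resolution order. The Anthropic route and the placeholder key are illustrative only; the point is that an explicit ``api_key`` wins over both environment variables:

.. code-block:: python

    import os

    from pruna.evaluation.metrics import VQAMetric

    # Rely on the environment: Pruna checks LITELLM_API_KEY before
    # OPENAI_API_KEY, so this key is forwarded even if both are set.
    os.environ["LITELLM_API_KEY"] = "sk-..."  # placeholder key
    metric = VQAMetric(vlm_type="litellm", model_name="openai/gpt-4o")

    # Non-OpenAI route (illustrative): pass the key explicitly so an
    # unrelated OPENAI_API_KEY set for other tools is never forwarded.
    metric = VQAMetric(
        vlm_type="litellm",
        model_name="anthropic/claude-3-5-sonnet-20240620",
        api_key=os.environ.get("ANTHROPIC_API_KEY"),
    )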

**Hosted judge via LiteLLM.** Set ``OPENAI_API_KEY`` or ``LITELLM_API_KEY`` (or pass ``api_key=...``) and use a vision-capable LiteLLM route as ``model_name``:

.. code-block:: python

    from pruna.evaluation.metrics import VQAMetric

    hosted = VQAMetric(vlm_type="litellm", model_name="openai/gpt-4o")

The same pattern works with ``get_vlm`` in ``pruna.evaluation.metrics.vlm_base``:

.. code-block:: python

    from pruna.evaluation.metrics.vlm_base import get_vlm

    vlm = get_vlm(vlm_type="litellm", model_name="openai/gpt-4o")

**Local judge via Transformers.** Pass a Hugging Face model id as ``model_name``, set ``device``, and use ``vlm_kwargs`` with ``model_load_kwargs`` to forward arguments to ``from_pretrained`` (the same pattern works for any registry metric class):

.. code-block:: python

    import torch

    from pruna.evaluation.metrics import VQAMetric

    local = VQAMetric(
        vlm_type="transformers",
        model_name="HuggingFaceTB/SmolVLM-256M-Instruct",
        device="cpu",
        vlm_kwargs={"model_load_kwargs": {"torch_dtype": torch.float32}},
    )

Use ``Task(request=[hosted, ...], ...)`` or ``Task(request=[local, ...], ...)`` (or pass the metric instance wherever metrics are configured). Full constructor patterns and ``get_vlm`` helpers are documented in ``pruna.evaluation.metrics.vlm_base`` and each metric’s docstring.
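A short sketch of that wiring, assuming a prompt-style datamodule such as ``LAION256`` and the ``Task``/``PrunaDataModule`` import paths used elsewhere in this guide:

.. code-block:: python

    from pruna.data.pruna_datamodule import PrunaDataModule
    from pruna.evaluation.task import Task

    # Metric instances and plain string names (for example "vqa") can be
    # mixed in one request; string names resolve to the defaults above.
    datamodule = PrunaDataModule.from_string("LAION256")
    task = Task(request=[hosted, local], datamodule=datamodule)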

EvaluationAgent Initialization
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

7 changes: 6 additions & 1 deletion pyproject.toml
@@ -153,7 +153,7 @@ dependencies = [
"peft>=0.18.0,<0.19.0",
"trl<=0.21.0",
"termcolor==2.3.0",
"realesrgan"
"realesrgan",
]

[project.optional-dependencies]
@@ -168,6 +168,10 @@ vllm = [
"vllm>=0.16.0",
"ray",
]
evaluation = [
"outlines>1.2.0,<2.0.0",
"litellm>=1.0.0",
]
stable-fast = [
"xformers>=0.0.30",
"stable-fast-pruna>=1.0.8,<1.0.9",
@@ -224,6 +228,7 @@ dev = [
"types-PyYAML",
"logbar",
"pytest-xdist>=3.8.0",
"pruna[evaluation]",
]
cpu = []
lmharness = [
24 changes: 22 additions & 2 deletions src/pruna/data/__init__.py
@@ -34,7 +34,13 @@
setup_hps_dataset,
setup_imgedit_dataset,
setup_long_text_bench_dataset,
setup_oneig_anime_stylization_dataset,
setup_oneig_dataset,
setup_oneig_general_object_dataset,
setup_oneig_knowledge_reasoning_dataset,
setup_oneig_multilingualism_dataset,
setup_oneig_portrait_dataset,
setup_oneig_text_rendering_dataset,
setup_parti_prompts_dataset,
)
from pruna.data.datasets.question_answering import setup_polyglot_dataset
@@ -103,19 +109,33 @@
"image_classification_collate",
{"img_size": 224},
),
"DrawBench": (setup_drawbench_dataset, "prompt_collate", {}),
"DrawBench": (setup_drawbench_dataset, "prompt_with_auxiliaries_collate", {}),
"PartiPrompts": (
setup_parti_prompts_dataset,
"prompt_with_auxiliaries_collate",
{},
),
"GenAIBench": (setup_genai_bench_dataset, "prompt_collate", {}),
"GenAIBench": (setup_genai_bench_dataset, "prompt_with_auxiliaries_collate", {}),
"GenEval": (setup_geneval_dataset, "prompt_with_auxiliaries_collate", {}),
"HPS": (setup_hps_dataset, "prompt_with_auxiliaries_collate", {}),
"ImgEdit": (setup_imgedit_dataset, "prompt_with_auxiliaries_collate", {}),
"LongTextBench": (setup_long_text_bench_dataset, "prompt_with_auxiliaries_collate", {}),
"GEditBench": (setup_gedit_dataset, "prompt_with_auxiliaries_collate", {}),
"OneIG": (setup_oneig_dataset, "prompt_with_auxiliaries_collate", {}),
"OneIGAnimeStylization": (
setup_oneig_anime_stylization_dataset,
"prompt_with_auxiliaries_collate",
{},
),
"OneIGGeneralObject": (setup_oneig_general_object_dataset, "prompt_with_auxiliaries_collate", {}),
"OneIGKnowledgeReasoning": (
setup_oneig_knowledge_reasoning_dataset,
"prompt_with_auxiliaries_collate",
{},
),
"OneIGMultilingualism": (setup_oneig_multilingualism_dataset, "prompt_with_auxiliaries_collate", {}),
"OneIGPortrait": (setup_oneig_portrait_dataset, "prompt_with_auxiliaries_collate", {}),
"OneIGTextRendering": (setup_oneig_text_rendering_dataset, "prompt_with_auxiliaries_collate", {}),
"DPG": (setup_dpg_dataset, "prompt_with_auxiliaries_collate", {}),
"TinyIMDB": (setup_tiny_imdb_dataset, "text_generation_collate", {}),
"VBench": (setup_vbench_dataset, "prompt_with_auxiliaries_collate", {}),