@@ -86,8 +86,17 @@ Groq is a cloud API provider that uses custom LPUs (Language Processing Units) t
 | `llama-3.3-70b-versatile` | 128k | **Best Overall**. High intelligence. |
 | `llama-3.1-8b-instant` | 128k | **Fastest**. Quick grammar fixes. |
 | `meta-llama/llama-4-scout-17b-16e-instruct` | 128k | New scout model. Good balance. |
+| `meta-llama/llama-4-maverick-17b-128e-instruct` | 128k | New maverick model. Better reasoning. |
 | `qwen/qwen3-32b` | 128k | Good speed and logic. |
+| `openai/gpt-oss-120b` | 128k | Large OSS GPT model. |
+| `openai/gpt-oss-20b` | 128k | Fast OSS GPT model. |
+| `groq/compound` | 128k | Deep-thinking internal model. |
 | `groq/compound-mini` | 128k | Optimized internal model. |
+| `moonshotai/kimi-k2-instruct` | 128k | Lightweight reasoning model. |
+| `moonshotai/kimi-k2-instruct-0905` | 128k | Specialized Kimi instruction model. |
+| `canopylabs/orpheus-v1-english` | 128k | CanopyLabs English priority model. |
+| `canopylabs/orpheus-arabic-saudi` | 128k | CanopyLabs Arabic dialect model. |
+| `allam-2-7b` | 128k | Efficient 7B general-use model. |

 ---

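The Groq model IDs in the table above are passed as the `model` field of a chat-completions request; Groq exposes an OpenAI-compatible endpoint. A minimal sketch of how such a request body is assembled (the endpoint path and field names follow Groq's published OpenAI-compatible format; the helper name and prompt are illustrative, and no network call is made here):

```python
import json

# Groq's OpenAI-compatible chat-completions endpoint (per its public docs).
GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"

def build_chat_body(model: str, prompt: str) -> dict:
    """Assemble the JSON body for a single-turn chat request (helper name is illustrative)."""
    return {
        "model": model,  # any ID from the table, e.g. "llama-3.1-8b-instant"
        "messages": [{"role": "user", "content": prompt}],
    }

body = build_chat_body("llama-3.1-8b-instant", "Fix the grammar: 'he go home yesterday'")
print(json.dumps(body))
```

Sending the body would additionally require an `Authorization: Bearer <GROQ_API_KEY>` header with any HTTP client.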
@@ -106,10 +115,22 @@ Groq is a cloud API provider that uses custom LPUs (Language Processing Units) t
 ### Available Models
 | Model ID | Description |
 | :--- | :--- |
-| `gemini-flash-latest` | **Default**. Fast and capable. |
+| `gemini-2.5-flash` | **Fastest**. Great for quick tasks. |
+| `gemini-2.5-pro` | High intelligence. Best overall quality. |
+| `gemini-flash-latest` | Fast and capable (latest flash). |
+| `gemini-flash-lite-latest` | Lightweight flash variant. |
+| `gemini-pro-latest` | Latest pro model. |
+| `gemini-3.1-pro-preview` | Next-gen pro preview. |
+| `gemini-3.1-pro-preview-customtools` | Pro preview with custom-tools support. |
+| `gemini-3-pro-preview` | Gemini 3 pro preview. |
+| `gemini-3-flash-preview` | Gemini 3 flash preview. |
+| `deep-research-pro-preview-12-2025` | Deep-research specialized model. |
 | `gemma-3-27b-it` | Large Gemma model. High quality. |
+| `gemma-3-12b-it` | Mid-size Gemma model. Good balance. |
+| `gemma-3-4b-it` | Compact Gemma model. Fast. |
+| `gemma-3-1b-it` | Smallest Gemma model. Ultra-light. |
 | `gemma-3n-e4b-it` | Efficient Gemma variant (4B). |
-| `gemma-3n-e2b-it` | Smallest Gemma variant (2B). |
+| `gemma-3n-e2b-it` | Efficient Gemma variant (2B). |

 ---

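Unlike Groq's OpenAI-style body, Google's Generative Language REST API puts the Gemini model ID in the URL path and wraps the prompt in a `contents`/`parts` structure. A sketch of how a request is shaped (URL pattern and field names follow the public `v1beta` REST format; the function name and prompt are illustrative, and nothing is sent):

```python
import json

def build_gemini_request(model: str, prompt: str) -> tuple[str, dict]:
    """Build the URL and JSON body for a generateContent call (name is illustrative)."""
    # The model ID from the table goes in the path; generateContent is the text method.
    url = (
        "https://generativelanguage.googleapis.com/v1beta/"
        f"models/{model}:generateContent"
    )
    body = {"contents": [{"parts": [{"text": prompt}]}]}
    return url, body

url, body = build_gemini_request("gemini-2.5-flash", "Summarize this sentence.")
print(url)
print(json.dumps(body))
```

An actual call would also need an API key, supplied via the `x-goog-api-key` header or a `key` query parameter.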