Releases: BlockRunAI/ClawRouter
v0.12.171 — Apr 29, 2026
- Three new free NVIDIA-hosted models added. BlockRun refreshed the free catalog on 2026-04-29 with three additions, all wired into ClawRouter as `free/`-prefixed entries:
  - `free/deepseek-v4-pro` — 1.6T MoE / 49B active, 1M context, MMLU-Pro 87.5, GPQA 90.1, SWE-bench 80.6, LiveCodeBench 93.5. NIM ~150 tok/s on Blackwell. Strongest free reasoning model.
  - `free/deepseek-v4-flash` — 284B / 13B active MoE, 1M context, ~5x faster than v4-pro. Strong on chat/summarization (MMLU-Pro 86.2). Weaker factual recall (SimpleQA 34% vs Pro's 58%) — pick v4-pro for fact-heavy agentic loops.
  - `free/nemotron-3-nano-omni-30b-a3b-reasoning` — 31B / 3.2B active MoE, 256K context. First vision-capable free model in the catalog. Accepts text, images, video (up to 2 min), audio (up to 1 hr). ChartQA 90.3, DocVQA 95.6, MMMU 70.8.
- `free/deepseek-v3.2` phased out in favor of `free/deepseek-v4-pro` (strict-superset replacement: same family, larger context, higher benchmarks). Removed from `BLOCKRUN_MODELS`, the `FREE_MODELS` set, the `top-models.json` picker, the README pricing table, and the SKILL.md model list. Aliases kept and redirected: `nvidia/deepseek-v3.2`, `free/deepseek-v3.2`, and `deepseek-free` now all resolve to `free/deepseek-v4-pro`, so existing pins continue to work and silently get the upgrade.
- `gpt-oss-120b`/`gpt-oss-20b` deliberately kept as defaults despite BlockRun's 2026-04-28 retirement (`available: false` server-side). Heavy user demand outweighs source-of-truth alignment for these specific IDs — the `free`/`nvidia`/`gpt-120b`/`gpt-20b` aliases all still resolve to `free/gpt-oss-120b` (or 20b), the `FREE_MODEL` constant still points at `free/gpt-oss-120b`, and the `ecoTiers.SIMPLE` primary stays unchanged. ClawRouter's existing fallback-chain logic handles any 400 ("Model not available") from BlockRun by trying the next chain entry, so failures degrade gracefully rather than break user workflows.
- New shorthand aliases for the additions: `deepseek-v4-pro`, `deepseek-v4-flash`, `v4-pro`, `v4-flash`, `nemotron-omni`, `nano-omni`, `vision-free` — chosen to mirror BlockRun's bare-name aliases at `route.ts:639-640`, plus a `vision-free` discovery shortcut for the new vision-capable model.
- `ecoTiers.SIMPLE` fallback chain extended with three free-model entries (mistral-small, deepseek-v4-flash, qwen3-next) inserted before the paid Gemini fallbacks, so eco-profile users get more all-free chain depth before paid models kick in. Primary is unchanged (`free/gpt-oss-120b`).
- Provider routing safety note. BlockRun's `NVIDIA_MODEL_MAP` in `src/lib/ai-providers.ts:2094-2111` does NOT have explicit entries for the 3 new models, but `callOpenAICompatible` falls through to the bare model name (`modelMap[k] || k`), so ClawRouter sending `nvidia/deepseek-v4-pro` reaches NVIDIA NIM as bare `deepseek-v4-pro` — which is what NIM expects. Documented in the `BLOCKRUN_MODELS` comment block in `src/models.ts`. If BlockRun later adds explicit map entries with different upstream names, this side needs no change.
- Net free-model count: 8 → 10 (8 originals + 3 added − 1 phased out). README badge, tagline, "Quick Start" sections, and the SKILL.md description all updated to reflect "10 free NVIDIA models". The README pricing table adds three new rows in benchmark order.
- Test fixtures. The `src/router/strategy.test.ts` `MODEL_PRICING` map gains entries for the 3 new free models. No assertion changes anywhere else — gpt-oss-120b stays the asserted default in `src/exclude-models.test.ts`, `src/models.test.ts`, `test/fallback.ts`, and `test/integration/exclude-models.test.ts`.
v0.12.170 — Apr 29, 2026
- Bare `kimi`/`moonshot` aliases now resolve to Kimi K2.6. BlockRun hid Kimi K2.5 from its public model UI on 2026-04-28 (commit `bfbdedf`) and now features K2.6 as the Moonshot flagship. ClawRouter's local alias map still pointed `kimi` and `moonshot` at K2.5, which created a quiet drift from the source-of-truth registry: agents asking for "kimi" got the previous-gen model while BlockRun's homepage advertised K2.6. The aliases now resolve to `moonshot/kimi-k2.6`, and a new bare `kimi-k2` alias is added for the same target. Users who explicitly pinned `kimi-k2.5` continue to get K2.5 — the explicit pin is preserved as a cost-stability opt-in ($0.60/$3.00 vs K2.6's $0.95/$4.00). NVIDIA-hosted K2.5 (retired 2026-04-21) still redirects to `moonshot/kimi-k2.5`.
- Routing tier primaries deliberately unchanged. `autoTiers.MEDIUM` and `agenticTiers.MEDIUM` continue to anchor on `moonshot/kimi-k2.5`. Promoting them to K2.6 would silently raise per-call cost +58% on input / +33% on output for every default user — that's a separate decision tracked outside this release, ideally with measured retention/IQ data on K2.6 vs K2.5. `premiumTiers.SIMPLE` was already `moonshot/kimi-k2.6` and is unchanged. Net effect: the behavior shift is opt-in via the `kimi` alias / `kimi-k2` shorthand, not forced through default routing.
- Doc and test fixture refresh. README's profile-overview table now shows `kimi-k2.6` in the PREMIUM column (matching `docs/routing-profiles.md` and `src/router/config.ts:1134`). `src/router/strategy.test.ts` gains a K2.6 pricing fixture so cost-calc tests stay honest if K2.6 ever appears in test scenarios. `src/proxy.models-endpoint.test.ts` now asserts both `kimi-k2.6` and `moonshot/kimi-k2.6` are discoverable through the `/models` endpoint. `test/fallback.ts`'s "Unknown model" example list leads with `moonshot/kimi-k2.6`.
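The opt-in split between floating aliases and explicit pins can be sketched like this. The table shape is an assumption for illustration, not ClawRouter's real `MODEL_ALIASES` contents:

```typescript
// Sketch: bare-name aliases float to the new flagship, while an explicit
// version pin keeps resolving to exactly what the user asked for.
const MODEL_ALIASES: Record<string, string> = {
  kimi: "moonshot/kimi-k2.6",
  moonshot: "moonshot/kimi-k2.6",
  "kimi-k2": "moonshot/kimi-k2.6",
  // Explicit pin preserved as a cost-stability opt-in ($0.60/$3.00)
  "kimi-k2.5": "moonshot/kimi-k2.5",
};

function resolveModelAlias(id: string): string {
  return MODEL_ALIASES[id] ?? id; // unknown IDs pass through untouched
}
```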
v0.12.169 — Apr 28, 2026
- Synthesize structured `tool_calls` from XML/text formats some models emit in `content`. Earlier tool-call hardening (v0.12.165, v0.12.166) handled the case where upstream returned a structured `tool_calls` array (or signaled `finish_reason: "tool_calls"`) and the model also leaked planning prose into `content`. This release closes a third gap, where upstream returns no structured tool calls at all and the model's actual tool invocations live as XML/text inside `content` — typical when a downstream client (OpenClaw is the visible offender) prompt-engineers tool instructions instead of sending a structured `tools[]` schema, so the model dutifully honors the prompt format and emits the call as text. Two formats observed in the wild are now recognized and converted to OpenAI-shaped `tool_calls`:
  - OpenClaw-style — `<tool_call>NAME<arg_key>K1</arg_key><arg_value>V1</arg_value>...<arg_key>Kn</arg_key><arg_value>Vn</arg_value></tool_call>`. Requires at least one `arg_key`/`arg_value` pair, so prose like `<tool_call>name</tool_call>` in documentation does not mis-fire. Surfaced via a real ClawRouter→OpenClaw session where the agent emitted six identical `<tool_call>web_search<arg_key>...</arg_key>...` blocks in 60 seconds, none executed, then hallucinated "I need a Brave API key" as the failure explanation.
  - Anthropic-style — `<function_calls><invoke name="NAME"><parameter name="K">V</parameter>...</invoke></function_calls>`. Reproduction confirmed Moonshot Kimi K2.6 emits this format when given prompt-engineered tool instructions without a structured `tools[]` schema.
  - Values are best-effort coerced via `JSON.parse`, so `<arg_value>5</arg_value>` becomes `5` (number) and `<arg_value>true</arg_value>` becomes `true` (boolean); strings that don't parse stay as strings. Synthesized IDs are OpenAI-shaped (`call_<base64url>`).
  - Wired into both response paths: the SSE conversion path (`src/proxy.ts:5081+`) and the non-streaming JSON path (`src/proxy.ts:5325+`). When extraction succeeds, `content` is blanked, `message.tool_calls` is populated, and `finish_reason` flips to `"tool_calls"` — exactly matching the shape downstream tool executors already handle from the v0.12.165/166 paths.
  - New module `src/textual-tool-calls.ts` plus `src/textual-tool-calls.test.ts` (13 unit tests) and four new integration tests in `src/proxy.tool-forwarding.test.ts` covering OpenClaw format / non-streaming, OpenClaw format / SSE, Anthropic format / non-streaming, and a negative test (plain prose passes through unchanged with `finish_reason: "stop"`).
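The OpenClaw-style extraction above can be sketched with a couple of regexes. This is a simplified illustration — the real logic lives in `src/textual-tool-calls.ts` and handles more edge cases (streaming chunk boundaries, multiple calls per message, synthesized `call_` IDs):

```typescript
// Sketch: pull an OpenClaw-style textual tool call out of `content` and
// coerce arg values via JSON.parse, as described above.
function extractOpenClawToolCall(
  content: string,
): { name: string; args: Record<string, unknown> } | null {
  const call = content.match(
    /<tool_call>\s*([\w./-]+)\s*((?:<arg_key>[\s\S]*?<\/arg_key>\s*<arg_value>[\s\S]*?<\/arg_value>\s*)+)<\/tool_call>/,
  );
  // Require at least one arg_key/arg_value pair so documentation prose like
  // <tool_call>name</tool_call> does not mis-fire.
  if (!call) return null;
  const args: Record<string, unknown> = {};
  const pair = /<arg_key>([\s\S]*?)<\/arg_key>\s*<arg_value>([\s\S]*?)<\/arg_value>/g;
  for (const [, key, raw] of call[2].matchAll(pair)) {
    try {
      args[key] = JSON.parse(raw); // "5" → 5, "true" → true
    } catch {
      args[key] = raw; // anything that isn't valid JSON stays a string
    }
  }
  return { name: call[1], args };
}
```

A successful extraction would then blank `content`, populate `message.tool_calls`, and flip `finish_reason` to `"tool_calls"`, as the release notes describe.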
- The `/model` picker allowlist now lives in `src/top-models.json` (single source of truth, loaded by `src/top-models.ts`). Previously `injectModelsConfig()` in `src/index.ts` carried a literal array that drifted from the install scripts' `TOP_MODELS` (which carry their own copies in `scripts/reinstall.sh` + `scripts/update.sh`). The JSON file is the version anyone actually edits going forward; both the runtime (`src/index.ts`) and the test suite (`src/top-models.test.ts`) read from it. The install scripts still carry their own embedded copies because they run before npm dependencies are resolved — but now there's one canonical list to copy from when adding a new model.
- Alias adds. `br-sonnet` → `anthropic/claude-sonnet-4.6` (matching the existing `br-` partner shorthand pattern), and `gpt5` now resolves to `openai/gpt-5.5` instead of `openai/gpt-5.4` (following v0.12.167's GPT-5.5 promotion as BlockRun's newest visible flagship).
v0.12.168 — Apr 25, 2026
- Propagate `openai/gpt-5.5` everywhere it should appear. v0.12.167 added the model to `BLOCKRUN_MODELS`, the `gpt-5.5` alias, and the install-script `TOP_MODELS` allowlist — but every other place ClawRouter advertises a flagship still pointed at `gpt-5.4`. This release closes the gap so 5.5 is a first-class citizen across routing, the picker, marketing, and the OpenClaw skill page.
  - `src/router/config.ts` — three fallback-chain insertions, no primary changes. `openai/gpt-5.5` slots in immediately before `openai/gpt-5.4` in `auto.COMPLEX.fallback`, `premiumTiers.COMPLEX.fallback`, and `agenticTiers.COMPLEX.fallback`. Both stay reachable; 5.5 gets preference when the chain reaches OpenAI. Comments updated so 5.5 is "newest flagship — 1M+ ctx, native agent + computer use" and 5.4 is "previous flagship — benchmarked at 6,213ms, IQ 57". Tier primaries are unchanged: promoting 5.5 to a primary slot needs measured latency/IQ data, which we don't have yet — that's a separate decision tracked outside this release.
  - `src/index.ts` — `/model` picker allowlist updated. `src/index.ts` carries its own copy of `TOP_MODELS` (separate from the install scripts' identical-but-distinct list — both populate the OpenClaw allowlist depending on install path). Added `openai/gpt-5.5` and `anthropic/claude-opus-4.5` (also missed in v0.12.167's `BLOCKRUN_MODELS` add for opus-4.5), and replaced the now-deprecated `minimax/minimax-m2.5` with `minimax/minimax-m2.7` so the picker matches the deprecation we landed yesterday.
  - `README.md` — Premium Models pricing table. Added the `openai/gpt-5.5` row at $5.00/$30.00 per 1M tokens (~$0.0175 per 0.5K-in/0.5K-out request), 1M context, full feature set. Placed between `claude-opus-4.6` ($0.0150) and `o1` ($0.0375) so the table stays sorted by approximate $/request.
  - `skills/clawrouter/SKILL.md` — model list line. The "55+ models including..." line now leads with `gpt-5.5, gpt-5.4, ...` and includes `claude-opus-4.5` alongside 4.7/4.6.
- Files deliberately not touched: `docs/smart-llm-router-14-dimension-classifier.md` and `docs/llm-router-benchmark-46-models-sub-1ms-routing.md` are frozen benchmark archives — adding 5.5 to a benchmark table without measured numbers would falsify the document. The `posts/*.md` marketing content is similarly point-in-time. Those will be refreshed if/when 5.5 gets benchmarked.
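The three fallback-chain insertions above amount to the same splice applied to three arrays. A pure-function sketch (the actual release edits the arrays literally in `src/router/config.ts`; `insertBefore` is an illustrative helper, not real code):

```typescript
// Sketch: insert a new model immediately before an anchor entry in a
// fallback chain, leaving the chain unchanged if the anchor is missing
// or the model is already present.
function insertBefore(chain: string[], anchor: string, newModel: string): string[] {
  const i = chain.indexOf(anchor);
  if (i === -1 || chain.includes(newModel)) return chain; // nothing to do
  return [...chain.slice(0, i), newModel, ...chain.slice(i)];
}
```

Both entries stay reachable; the new one simply gets first preference when the chain reaches that provider.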
v0.12.167 — Apr 24, 2026
- Realign the model registry to BlockRun source-of-truth. An audit found three drifts where ClawRouter's `BLOCKRUN_MODELS` table didn't match what `blockrun/src/lib/models.ts` actually exposes. The server is the source of truth for which models exist and what they cost; the proxy's local view should mirror that 1:1 so cost estimation, the `/model` picker, and routing tier selection all see the same world the server does.
  - Add `openai/gpt-5.5`. BlockRun's newest visible OpenAI flagship — first fully retrained base since GPT-4.5, 1M+ context, 128K output, native agent + computer use. Pricing $5/$30 per 1M tokens. Added to `BLOCKRUN_MODELS`, the `gpt-5.5` alias, and the `TOP_MODELS` allowlist in both install scripts. Routing tiers in `src/router/config.ts` continue to anchor on `gpt-5.4` because that's what's benchmarked; users can pin 5.5 explicitly. A routing change is a separate decision.
  - Add `anthropic/claude-opus-4.5` as a distinct model. Previously ClawRouter's `MODEL_ALIASES` silently rewrote `anthropic/claude-opus-4.5` to 4.7, making 4.5 unreachable through ClawRouter even though BlockRun lists it as a separate visible model with its own pricing and 200K context (vs 4.6/4.7's 1M). Removed the alias, added 4.5 to `BLOCKRUN_MODELS` with its real 200K/32K shape, and added an `anthropic/claude-opus-4-5` (dashed) alias for the slug variant. A test in `src/models.test.ts` was codifying the old upgrade-to-4.7 behavior — flipped to assert the pin is preserved end-to-end.
  - Mark `minimax/minimax-m2.5` deprecated → fallback `minimax/minimax-m2.7`. BlockRun retired m2.5 entirely (only m2.7 is in their `MODELS` table). ClawRouter still listed both; m2.5 now flips to `deprecated: true` with the m2.7 fallback so existing pins keep working.
  - `scripts/reinstall.sh` + `scripts/update.sh`: drop `minimax/minimax-m2.5` from the `TOP_MODELS` picker allowlist (still reachable, just hidden from the picker) and add `openai/gpt-5.5` + `anthropic/claude-opus-4.5`.
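The deprecation-with-fallback pattern above can be sketched as follows. The registry shape is an assumption for illustration — ClawRouter's real `BLOCKRUN_MODELS` entries carry more fields:

```typescript
// Sketch: a deprecated registry entry carries a fallback ID, so an
// existing pin on the retired model keeps resolving to a live one.
interface ModelEntry {
  id: string;
  deprecated?: boolean;
  fallback?: string;
}

const REGISTRY: Record<string, ModelEntry> = {
  "minimax/minimax-m2.5": {
    id: "minimax/minimax-m2.5",
    deprecated: true,
    fallback: "minimax/minimax-m2.7",
  },
  "minimax/minimax-m2.7": { id: "minimax/minimax-m2.7" },
};

function resolveModel(id: string): string {
  const entry = REGISTRY[id];
  // Follow the fallback chain for deprecated entries; live entries and
  // unknown IDs resolve to themselves.
  if (entry?.deprecated && entry.fallback) return resolveModel(entry.fallback);
  return id;
}
```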
v0.12.166 — Apr 24, 2026
- Tool-call planning prose suppressed even when `finish_reason` is the only signal (thanks @0xCheetah1, #162). Follow-up to v0.12.165's #161 fix. Live Telegram/OpenClaw testing caught one more shape the planning-prose leak could wriggle through: some upstreams (Moonshot Kimi K2.6 again) mark a turn with `finish_reason: "tool_calls"` without exposing `message.tool_calls`/`delta.tool_calls` at the same inspection point. The #161 gate (`toolCalls.length > 0`) saw no array and let the prose through. The gate is now `endsWithToolCalls || toolCalls.length > 0` — applied consistently across the non-streaming JSON path and the SSE emission path, plus the finish-reason override in the SSE terminal chunk. Two new regression tests in `src/proxy.tool-forwarding.test.ts` — one per response shape — lock the behavior in: a response with `finish_reason: "tool_calls"` and no tool_calls array has its `content` blanked and the `tool_calls` finish_reason preserved. User-visible impact: fewer "I should look up X before replying" preambles sneaking into agent chat surfaces on turns that are supposed to be pure tool invocations.
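The widened gate can be sketched like this. Names and shapes are illustrative — the real code applies the same condition at three points (JSON path, SSE path, SSE terminal chunk):

```typescript
// Sketch: suppress leaked planning prose when EITHER signal marks the
// turn as a tool call — the tool_calls array OR finish_reason alone.
interface Choice {
  message: { content?: string; tool_calls?: unknown[] };
  finish_reason?: string;
}

function suppressPlanningProse(choice: Choice): Choice {
  const toolCalls = choice.message.tool_calls ?? [];
  const endsWithToolCalls = choice.finish_reason === "tool_calls";
  if (endsWithToolCalls || toolCalls.length > 0) {
    choice.message.content = ""; // blank the leaked planning prose
    choice.finish_reason = "tool_calls"; // preserve the tool_calls signal
  }
  return choice;
}
```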
v0.12.165 — Apr 24, 2026
- Tool-call planning prose no longer leaks to chat surfaces (thanks @0xCheetah1, #161). Some OpenAI-compatible providers — Moonshot's Kimi K2.6 was the visible offender through OpenClaw Telegram — return `{ content: "The user wants the current time. I should call get_current_time with Chicago.", tool_calls: [...] }`. Tool execution only needs `tool_calls`; the `content` field is internal planning that the upstream should have hidden behind a `<think>` tag but didn't. ClawRouter now suppresses `content` whenever `tool_calls.length > 0`, in both the non-streaming JSON response path and the SSE-conversion path that clients like OpenClaw hit with `stream: true`. Tool execution is unaffected; only the user-visible planning prose goes away. Covered by two regression tests in `src/proxy.tool-forwarding.test.ts` (one per response shape).
- Plugin restart loop killed. `injectModelsConfig()` in `src/index.ts` writes ClawRouter-owned keys into `~/.openclaw/openclaw.json` on every plugin load. OpenClaw's config watcher has a catch-all rule — any change with no matching plugin-declared prefix triggers a full gateway restart — so `mcp.servers.blockrun` writes kept ping-ponging the gateway. The plugin definition now exposes `reload: { noopPrefixes: ["mcp.servers.blockrun"] }` (a new optional field on `OpenClawPluginDefinition`) to tell OpenClaw's loader that ClawRouter self-manages that prefix. Silently ignored on OpenClaw runtimes that predate the `reload` field.
- Dedup + response cache now isolate streaming and non-streaming callers. Discovered while adding the SSE regression test for the tool-call fix: a `stream: true` request that followed an identical-body `stream: false` request was getting `content-type: application/json` instead of `text/event-stream`. Two compounding bugs: ClawRouter rewrites `parsed.stream = false` before the upstream call (the BlockRun API doesn't support streaming), and both `RequestDeduplicator.hash(body)` and `ResponseCache.generateKey(body)` ran AFTER that rewrite — so a `stream: true` and a `stream: false` request hashed identically. Worse, `response-cache.ts`'s `normalizeForCache` explicitly stripped `stream` from the key with the comment "we handle streaming separately" (it never did). Fix: (1) prefix both `dedupKey` and `cacheKey` in `src/proxy.ts` with the original `isStreaming` intent (`"sse:"` vs `"json:"`), so the two shapes never share a cache slot; (2) stop stripping `stream` in `normalizeForCache`. A latent bug — real-world impact was small because the exact scenario (identical body, different stream flag, within the 30s/10min TTLs) is rare in practice — but a correctness bug nonetheless. A regression test was added (isolating the dedup cache between streaming and non-streaming requests with identical bodies); the existing `response-cache.test.ts` expectation was inverted (it was codifying the broken behavior).
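The key-prefix fix can be sketched in a few lines. `hashBody` stands in for the real hashing in `src/proxy.ts`; the point is that the caller's original streaming intent is captured before any `parsed.stream = false` rewrite:

```typescript
// Sketch: prefix the cache/dedup key with the caller's ORIGINAL streaming
// intent so identical bodies never share a cache slot across shapes.
import { createHash } from "node:crypto";

function hashBody(body: unknown): string {
  return createHash("sha256").update(JSON.stringify(body)).digest("hex");
}

function cacheKey(body: { stream?: boolean; [k: string]: unknown }): string {
  const intent = body.stream ? "sse:" : "json:"; // read BEFORE any rewrite
  return intent + hashBody(body); // and leave `stream` in the hashed body too
}
```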
v0.12.164 — Apr 23, 2026
- Video generation switched to async submit + poll (tracks BlockRun server commit `654cd35`). The server-side `/v1/videos/generations` endpoint no longer blocks for the full 60–180s upstream generation — POST now returns `202 { id, poll_url }` in ~3–20s, and a separate GET on the `poll_url` (same x-payment header) returns `202` while the job is queued/in progress and `200` with the final video on completion. The server settles only on the first completed poll, so an upstream failure or caller disconnect means zero USDC charged. ClawRouter's proxy handler in `src/proxy.ts` now collapses this back into a single blocking POST for the client: submit upstream, poll the `poll_url` every 5s (initial 3s grace) up to a 5-min deadline, then back up + serve locally as before. Legacy sync-shaped server responses still work — the handler checks for `poll_url` before switching to the poll loop. Client-side timeouts bumped: `buildVideoGenerationProvider.timeoutMs` 200s → 330s; the `/videogen` slash command 200s → 330s; both sit above the 5-min internal poll deadline so the last `data[0].url` finishes streaming back. User-facing impact: same blocking POST as before, but Cloudflare's 100s edge timeout no longer kills long-running Seedance 2.0 jobs.
- Image/video plumbing parity — four exposure surfaces now match the backend. The BlockRun server has supported 8 image models (DALL-E 3, GPT Image 1, Nano Banana / Pro, Flux 1.1 Pro, Grok Imagine / Pro, CogView-4) and 4 video models (Grok Imagine, Seedance 1.5 Pro / 2.0 Fast / 2.0) since v0.12.162, but the ClawRouter client exposed them inconsistently:
  - `buildImageGenerationProvider` in `src/index.ts` only advertised 4 image models. OpenClaw's native image picker couldn't see Flux, Grok Imagine (×2), or CogView-4 — the only way to hit them was raw curl with an explicit `model` field. The `models` array now lists all 8; `defaultModel` switched from `openai/gpt-image-1` to `google/nano-banana` (cheapest general-purpose default); `capabilities.geometry.sizes` adds CogView-4's 512x512, 768x768, 768x1344, 1344x768, and 1440x1440 sizes; `capabilities.edit.enabled` flipped to `true` so OpenClaw's edit UI surfaces gpt-image-1's `/v1/images/image2image` path.
  - `MODEL_ALIASES` in `src/models.ts` had zero image/video shortcuts — all 140+ aliases were LLM chat models. Added 17 new aliases so `resolveModelAlias("dalle")` → `openai/dall-e-3`, `"flux"` → `black-forest/flux-1.1-pro`, `"seedance"` → `bytedance/seedance-1.5-pro`, plus `banana`, `banana-pro`, `nano-banana-pro`, `gpt-image`, `flux-pro`, `grok-imagine`/`-pro`, `grok-video`, `cogview`, `seedance-1.5`, `seedance-2`, and `seedance-2-fast`.
  - `/imagegen` and `/videogen` slash commands now actually exist. The README documented `/imagegen a dog dancing on the beach` as if it worked, but no such command was ever registered — silent drift from the aspirational README. Both commands now register via `api.registerCommand`, accept `--model=<alias>`, `--size=WxH`, `--n=<int>`, and `--duration=<5|8|10>` flags (parsed by a shared `parseGenArgs` helper), resolve aliases through `resolveModelAlias`, POST to the proxy's `/v1/images/generations` and `/v1/videos/generations` endpoints, and return inline markdown image embeds or video URLs. 402 responses surface as "top up with `/wallet`" hints; the video timeout is 200s to cover upstream polling. `/img2img` remains README-only for now — it will land in a follow-up.
  - The partner framework now includes image/video as LLM-callable tools. Added three new `PartnerServiceDefinition` entries in `src/partners/registry.ts` — `image_generation`, `image_edit`, `video_generation` — so the existing `buildPartnerTools` → `api.registerTool` pipeline surfaces them as `blockrun_image_generation`, `blockrun_image_edit`, and `blockrun_video_generation` tools. Agents can now tool-call image/video from chat without the skill layer guessing at raw HTTP shapes.
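The submit-then-poll collapse described above can be sketched as a small loop. Endpoint shapes and helper names are assumptions, not the real `src/proxy.ts` code; the timing constants match the release notes (3s grace, 5s interval, 5-min deadline):

```typescript
// Sketch: poll a job's poll_url until it returns 200, treating 202 as
// queued/in-progress and anything else as a failure.
type PollResult = { status: number; body: unknown };

async function pollUntilDone(
  get: (url: string) => Promise<PollResult>,
  pollUrl: string,
  opts = { graceMs: 3_000, intervalMs: 5_000, deadlineMs: 300_000 },
  sleep: (ms: number) => Promise<void> = (ms) => new Promise((r) => setTimeout(r, ms)),
): Promise<unknown> {
  const deadline = Date.now() + opts.deadlineMs;
  await sleep(opts.graceMs); // initial grace before the first poll
  while (Date.now() < deadline) {
    const res = await get(pollUrl);
    if (res.status === 200) return res.body; // job complete → final video payload
    if (res.status !== 202) throw new Error(`poll failed: ${res.status}`);
    await sleep(opts.intervalMs); // queued/in_progress → keep polling
  }
  throw new Error("internal poll deadline exceeded");
}
```

The injectable `sleep` is only there so the loop can be exercised without real delays; the handler would fall back to the legacy sync shape whenever the submit response carries no `poll_url`.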
- Dropped the Twitter/X user-lookup partner. We no longer run X data as a product surface. Removed `x_users_lookup` from `PARTNER_SERVICES`, deleted the `skills/x-api/` skill directory, and stripped `x|` from the `/v1/(?:x|partner|pm|...)/` paid-route regex in `src/proxy.ts` (so `/v1/x/*` no longer short-circuits to the partner proxy — it now falls through to the usual chat-completion path or 404s cleanly). Server-side `/v1/x/*` endpoints are still live at blockrun.ai/api for any existing integrations; only the client wiring is retired.
- `/partners` + `clawrouter partners` CLI output compressed ~4×. Previously 6 lines per service (name, full agent-facing description, tool name, method, pricing block, blank) × 17 services ≈ 100 lines of wall-of-text, which is what @vicky was calling out as unreadable ("读不了"). `PartnerServiceDefinition` gained two fields — `category` ("Prediction markets" / "Market data" / "Image & Video") and `shortDescription` (≤ 40 chars) — driving a new grouped, column-aligned one-liner per tool. The long `description` field stays intact for the LLM-facing JSON Schema (agents still see "Call this ONLY when..." guidance). Output is now ~25 lines, one screen.
v0.12.163 — Apr 23, 2026
- README leads with the free tier. Post-v0.12.160 the product story changed — 8 NVIDIA models free forever, no wallet required to start — but the README still opened with "fund your wallet" as step 2 of Quick Start and buried the free tier in a single line at the bottom. Rewritten so the free tier is the hook, not a footnote: the hero tagline adds "8 models free, no crypto required. No signup. No API key. No credit card." plus a 🆓 shields.io badge; the "Why ClawRouter exists" list opens with "Starts at $0"; the comparison-vs-others table adds a "Free tier" row showing ClawRouter's "8 models, no signup" against OpenRouter's rate limits and LiteLLM/Martian/Portkey's "no"; Quick Start gets a "No wallet? 8 models work free out of the box" callout and reframes step 2 as optional; the routing-profiles table adds `/model free` at 100% savings; the Costs section lists the current 8 free model IDs by name (replacing a stale 11-model list referencing the retired Nemotron Ultra / Mistral Large / Devstral). This release is README-only — code is identical to v0.12.162 — the version bump exists so the updated marketing reaches the npmjs.com package page and the clawhub marketplace listing.
v0.12.162 — Apr 23, 2026
- ByteDance Seedance video models wired into the client. The BlockRun server has exposed three Seedance models since late April — `bytedance/seedance-1.5-pro` ($0.03/sec), `bytedance/seedance-2.0-fast` ($0.15/sec, ~60–80s gen time), and `bytedance/seedance-2.0` Pro ($0.30/sec) — all 720p, text-to-video + image-to-video, 5s default and up to 10s. The `/v1/videos/generations` proxy passthrough in `src/proxy.ts` already forwarded any `model` value untouched, so actual USDC charges were always correct (the server dictates the amount in its 402 response, and `payment-preauth.ts` caches the server-sent `PaymentRequired`, not a local estimate — charges never depended on ClawRouter's local pricing table). Three client-side gaps were fixed anyway:
  - Usage telemetry was wrong for Seedance. `estimateVideoCost` in `src/proxy.ts` only knew `xai/grok-imagine-video`, so every Seedance request logged $0.42/clip to `logUsage` regardless of what the user was actually billed — skewing `/usage` output, savings %, and journal cost fields. `VIDEO_PRICING` now carries all four models at real server rates.
  - OpenClaw's native video UI only saw one model. `buildVideoGenerationProvider` in `src/index.ts` advertised `models: ["xai/grok-imagine-video"]`, so users of the UI picker couldn't pick Seedance at all; the only path was raw curl with an explicit `model` field. The `models` array now lists all four, and provider capabilities widen to `maxDurationSeconds: 10` / `supportedDurationSeconds: [5, 8, 10]` to cover both vendors' ranges (the server still validates per-model `maxDurationSeconds`, so invalid combos return a clean 400).
  - README docs only mentioned Grok. The video-generation section now lists all four models in the table, swaps the curl example to `bytedance/seedance-2.0-fast` (the sweet-spot price/quality pick), and makes the upstream-polling note vendor-neutral instead of xAI-specific.
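The corrected telemetry pricing amounts to a per-second lookup. The `VIDEO_PRICING` shape and `estimateVideoCost` signature below are assumptions for illustration, using only the per-second rates quoted above (Grok's rate is omitted since only its per-clip figure is stated):

```typescript
// Sketch: per-second cost estimation for usage telemetry — the server
// still dictates the actual USDC charge via its 402 response.
const VIDEO_PRICING: Record<string, number> = {
  // USD per second of generated video (rates quoted in the release notes)
  "bytedance/seedance-1.5-pro": 0.03,
  "bytedance/seedance-2.0-fast": 0.15,
  "bytedance/seedance-2.0": 0.3,
};

function estimateVideoCost(model: string, durationSeconds: number): number | null {
  const rate = VIDEO_PRICING[model];
  return rate === undefined ? null : rate * durationSeconds;
}
```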
- Docs: fixed the proxy port in the free-models guide. Thanks to @Bortlesboat (#160) for catching `4402` → `8402` typos in `docs/11-free-ai-models-zero-cost-blockrun.md`. The rest of the repo, `src/config.ts` (`DEFAULT_PORT = 8402`), and all other docs have always said 8402; that one guide was sending new users to the wrong local port.
v0.12.159
- Market data tools — the BlockRun gateway now exposes realtime and historical market data; ClawRouter wires it into OpenClaw as 6 first-class agent tools so the model stops scraping finance sites. Paid ($0.001 via x402, same wallet as LLM calls): `blockrun_stock_price` and `blockrun_stock_history` across 12 global equity markets (US, HK, JP, KR, UK, DE, FR, NL, IE, LU, CN, CA). Free (no x402 charge): `blockrun_stock_list` (ticker lookup / company-name search), `blockrun_crypto_price` (BTC-USD, ETH-USD, SOL-USD, …), `blockrun_fx_price` (EUR-USD, GBP-USD, JPY-USD, …), `blockrun_commodity_price` (XAU-USD gold, XAG-USD silver, XPT-USD platinum). Tool schemas advertise market codes, session hints (pre/post/on), and bar resolutions (1/5/15/60/240/D/W/M). Path routing extended: the partner-proxy whitelist in `src/proxy.ts` now matches `/v1/(?:x|partner|pm|exa|modal|stocks|usstock|crypto|fx|commodity)/`, routing all new paths through `proxyPaidApiRequest` (`payFetch` handles the 402 when present, and passes through 200s for the free categories). Tool definitions added in `src/partners/registry.ts`; `skills/clawrouter/SKILL.md` gains a "Built-in Agent Tools" section listing market data + X intelligence + Polymarket alongside the LLM router.
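The extended whitelist behaves as a simple prefix test. A minimal sketch, mirroring the pattern quoted above (`isPaidApiPath` is an illustrative wrapper, not the real function name):

```typescript
// Sketch: the partner-proxy whitelist — paths under these /v1/ categories
// are routed through the paid-API proxy instead of the chat path.
const PAID_ROUTE =
  /^\/v1\/(?:x|partner|pm|exa|modal|stocks|usstock|crypto|fx|commodity)\//;

function isPaidApiPath(path: string): boolean {
  return PAID_ROUTE.test(path);
}
```

Anything not matching the pattern — `/v1/chat/completions`, for instance — falls through to the normal chat-completion handling.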