From e54f30412bf1e199bf5ebaa2cc2eed39ea2c46b8 Mon Sep 17 00:00:00 2001 From: Soju06 Date: Tue, 10 Mar 2026 13:04:49 +0900 Subject: [PATCH 1/3] fix: update all correction files to 2026-03-10 - frontier-models: add GPT-5.4, GPT-5.3 Instant, Gemini 3.1 Flash-Lite - open-source-models: add DeepSeek V4, Qwen 3.5 Small Series - cli-tools: shadcn 4.0 major release - javascript: TypeScript 6.0 RC - python: Python 3.14 stable, FastAPI drops Pydantic v1, uv 0.10.x breaking changes, Django 6.0.3 CVEs, ruff 0.15.x - platforms: Supabase OpenAPI schema anon key deprecated (Mar 11) - macos: Xcode 26.3 Claude/Codex MCP integration, macOS 26.3.1 - runtimes: Node v25.8.0, Python 3.14, Deno 2.7.4, Bun 1.3.10 --- skills/knowpatch/corrections/cli-tools.md | 11 ++--- .../knowpatch/corrections/frontier-models.md | 37 +++++++++-------- skills/knowpatch/corrections/javascript.md | 15 +++---- skills/knowpatch/corrections/macos.md | 13 +++--- .../corrections/open-source-models.md | 33 ++++++++------- skills/knowpatch/corrections/platforms.md | 17 +++++++- skills/knowpatch/corrections/python.md | 40 ++++++++++++------- skills/knowpatch/corrections/runtimes.md | 37 +++++++++-------- 8 files changed, 120 insertions(+), 83 deletions(-) diff --git a/skills/knowpatch/corrections/cli-tools.md b/skills/knowpatch/corrections/cli-tools.md index b5b27fd..982cf5e 100644 --- a/skills/knowpatch/corrections/cli-tools.md +++ b/skills/knowpatch/corrections/cli-tools.md @@ -3,12 +3,12 @@ ecosystem: cli-tools description: CLI tool renames, command changes, major versions tags: [shadcn, tailwind, eslint, create-react-app, vite, webpack, prettier, cli] version: "0.4.1" # x-release-please-version -last_updated: "2026-02-24" +last_updated: "2026-03-10" --- # CLI Tools — Version Corrections -> Last updated: 2026-02-24 +> Last updated: 2026-03-10 ## Table of Contents - [shadcn (formerly shadcn-ui)](#shadcn) @@ -19,14 +19,15 @@ last_updated: "2026-02-24" --- -### shadcn — 2025-10 -- **Outdated**: Package name 
`shadcn-ui`, install command `npx shadcn-ui@latest init` +### shadcn — 2026-03 +- **Outdated**: Package name `shadcn-ui`, install command `npx shadcn-ui@latest init`, shadcn 2.x/3.x - **Current**: + - **shadcn 4.0** (2026-03-06) — Major version bump - Package name: `shadcn` - Install: `npx shadcn@latest init` - The `shadcn-ui` package is discontinued — use `shadcn` package instead - shadcn dropped Tailwind CSS 3 support — **Tailwind 4 is required** -- **Impact**: `npx shadcn-ui@latest` fails or installs a broken version. Using with Tailwind 3 causes incompatibility. +- **Impact**: `npx shadcn-ui@latest` fails or installs a broken version. Using with Tailwind 3 causes incompatibility. shadcn 3.x is now outdated. - **Lookup**: `npm view shadcn version` ### Tailwind CSS v4 — 2025-03 diff --git a/skills/knowpatch/corrections/frontier-models.md b/skills/knowpatch/corrections/frontier-models.md index 6a934aa..e776566 100644 --- a/skills/knowpatch/corrections/frontier-models.md +++ b/skills/knowpatch/corrections/frontier-models.md @@ -3,12 +3,12 @@ ecosystem: frontier-models description: Proprietary frontier AI model names, IDs, SDK versions, multimodal support tags: [claude, gpt, gemini, openai, anthropic, model, llm, sdk, ai] version: "0.4.1" # x-release-please-version -last_updated: "2026-02-27" +last_updated: "2026-03-10" --- # Frontier (Proprietary) Models — Version Corrections -> Last updated: 2026-02-27 +> Last updated: 2026-03-10 ## Table of Contents - [Anthropic Claude](#anthropic-claude) @@ -16,7 +16,7 @@ last_updated: "2026-02-27" - [Google Gemini 3 Family](#google-gemini-3-family) - [Multimodal Input Comparison](#multimodal-input-comparison) -> Open-source models → see `open-source-models.md` (GLM-5, MiniMax M2.5, Kimi K2.5, DeepSeek V3.2, Qwen 3.5) +> Open-source models → see `open-source-models.md` (GLM-5, MiniMax M2.5, Kimi K2.5, DeepSeek V4, Qwen 3.5) --- @@ -31,27 +31,28 @@ last_updated: "2026-02-27" - **Impact**: Using legacy IDs like 
`claude-3-5-sonnet`, `claude-3-opus` results in deprecated or significantly degraded performance - **Lookup**: `npm view @anthropic-ai/sdk version` -### OpenAI GPT-5 Family — 2026-02 +### OpenAI GPT-5 Family — 2026-03 - **Outdated**: GPT-4 Turbo, GPT-4o are the latest/best models - **Current**: + - `gpt-5.4` — New flagship (2026-03-05), combines coding + reasoning, native computer-use, 1M ctx + - ChatGPT: GPT-5.4 Thinking / Pro + - API: `gpt-5.4`, `gpt-5.4-pro` - `gpt-5.3-codex` — Best agentic coding model (2026-02-05) - `gpt-5.3-codex-spark` — Ultra-fast real-time coding, Cerebras-powered (2026-02-12), text-only 128k - - `gpt-5.2` — Professional work (2025-12-11) - - ChatGPT: GPT-5.2 Instant / Thinking / Pro - - API: `gpt-5.2`, `gpt-5.2-chat-latest`, `gpt-5.2-pro` - - `gpt-5.2-codex` — Coding optimized - - `gpt-5.1` — Legacy (sunset planned in ChatGPT) - - `gpt-5` — Previous generation (still available via API, $1.25/$10 per 1M tokens) - - `gpt-5-mini`, `gpt-5-nano` — Small variants + - `gpt-5.3-chat-latest` — Everyday conversational (2026-03-03), replaces GPT-5.2 Instant + - `gpt-5.2` — Previous flagship (Instant deprecated 2026-06-03) + - API: `gpt-5.2`, `gpt-5.2-pro` + - `gpt-5.1` and below — Legacy - GPT-4 family (4o, 4-turbo, etc.) 
— Fully legacy - **Input**: Text, Image / **Output**: Text (Codex variants are text-only) -- **Impact**: Using GPT-4 model IDs invokes legacy models with significantly degraded performance +- **Impact**: Using GPT-4 or GPT-5.2 Instant model IDs invokes deprecated models - **Lookup**: `npm view openai version` -### Google Gemini 3 Family — 2026-02 +### Google Gemini 3 Family — 2026-03 - **Outdated**: Gemini 1.5 Pro is the latest - **Current**: - Gemini 3.1 Pro (Preview) — Highest capability, 1M input tokens, 64k output tokens + - Gemini 3.1 Flash-Lite (Preview) — Fastest/cheapest, $0.25/$1.50 per 1M tokens, 1M ctx, 64k output (2026-03-03) - Gemini 3 Pro — Stable version - Gemini 3 Flash — Fast variant - Gemini 2 family — Previous generation (still available) @@ -61,23 +62,25 @@ last_updated: "2026-02-27" - **Impact**: Using Gemini 1.5 models results in legacy performance; failing to inform about video/audio input support - **Lookup**: `npm view @google/genai version` -## Multimodal Input Comparison — 2026-02 +## Multimodal Input Comparison — 2026-03 | Model | Text | Image | Video | Audio | PDF | |-------|------|-------|-------|-------|-----| | Claude Opus 4.6 | O | O | X | X | O | | Claude Sonnet 4.6 | O | O | X | X | O | | Claude Haiku 4.5 | O | O | X | X | O | -| GPT-5.2 (Thinking/Pro) | O | O | X | X | - | -| GPT-5.2 Instant | O | O | X | X | - | +| GPT-5.4 (Thinking/Pro) | O | O | X | X | - | | GPT-5.3-Codex | O | X | X | X | X | | GPT-5.3-Codex-Spark | O | X | X | X | X | +| GPT-5.3 Instant | O | O | X | X | - | | Gemini 3.1 Pro | O | O | O | O | O | +| Gemini 3.1 Flash-Lite | O | O | O | O | O | | Gemini 3 Pro/Flash | O | O | O | O | O | | Kimi K2.5 | O | O | X | X | X | | MiniMax M2.5 | O | X | X | X | X | | GLM-5 | O | X | X | X | X | +| DeepSeek V4 | O | O | O | X | X | | DeepSeek V3.2 | O | X | X | X | X | | Qwen 3.5 | O | O | X | X | X | -Note: GPT-5.3-Codex, MiniMax M2.5, GLM-5, DeepSeek V3.2 are text-only. Kimi K2.5, Qwen 3.5 support native vision. 
Gemini is the only family supporting video + audio input. +Note: GPT-5.3-Codex, MiniMax M2.5, GLM-5, DeepSeek V3.2 are text-only. DeepSeek V4 supports native multimodal (text, image, video). Kimi K2.5, Qwen 3.5 support native vision. Gemini is the only family supporting video + audio input. diff --git a/skills/knowpatch/corrections/javascript.md b/skills/knowpatch/corrections/javascript.md index cd7b867..f5ba4c5 100644 --- a/skills/knowpatch/corrections/javascript.md +++ b/skills/knowpatch/corrections/javascript.md @@ -3,17 +3,17 @@ ecosystem: javascript description: JS/TS library API changes tags: [zod, react, typescript, npm, bun, deno, pnpm, esm, types, javascript, js, ts] version: "0.4.1" # x-release-please-version -last_updated: "2026-02-24" +last_updated: "2026-03-10" --- # JavaScript/TypeScript — Version Corrections -> Last updated: 2026-02-24 +> Last updated: 2026-03-10 ## Table of Contents - [Zod 4](#zod-4) - [React 19](#react-19) -- [TypeScript 5.9](#typescript-59) +- [TypeScript 5.9 / 6.0](#typescript-59--60) --- @@ -104,12 +104,13 @@ last_updated: "2026-02-24" --- -### TypeScript 5.9 — 2025 +### TypeScript 5.9 / 6.0 — 2026-03 - **Outdated**: TypeScript 5.3-5.5 is the latest - **Current**: - - TypeScript 5.9 + - TypeScript 5.9.3 — Current stable + - **TypeScript 6.0 RC** (`6.0.1-rc`, 2026-03-06) — Stable release imminent - `moduleResolution: "bundler"` now the default - `satisfies` operator stabilized - `import type` auto-separation improved -- **Impact**: Low (good backwards compatibility) -- **Lookup**: `npm view typescript version` +- **Impact**: TypeScript 6.0 is a major version — verify migration notes when stable. 5.9 code is expected to be mostly compatible. 
+- **Lookup**: `npm view typescript version`, `npm view typescript@rc version` diff --git a/skills/knowpatch/corrections/macos.md b/skills/knowpatch/corrections/macos.md index c9ba5e0..49ea492 100644 --- a/skills/knowpatch/corrections/macos.md +++ b/skills/knowpatch/corrections/macos.md @@ -3,12 +3,12 @@ ecosystem: macos description: macOS 26 version naming, Liquid Glass, Swift 6.2, system toolchain, Apple framework changes tags: [macos, tahoe, xcode, swift, swiftui, liquid-glass, metal, rosetta, intel, apple-silicon, foundation-models] version: "0.4.1" # x-release-please-version -last_updated: "2026-02-27" +last_updated: "2026-03-10" --- # macOS & Apple Platforms — Version Corrections -> Last updated: 2026-02-27 +> Last updated: 2026-03-10 ## Table of Contents - [macOS Version Naming](#macos-version-naming) @@ -31,7 +31,7 @@ last_updated: "2026-02-27" - macOS 15 Sequoia → **macOS 26 Tahoe** (not 16) - Same scheme for iOS 26, iPadOS 26, watchOS 26, tvOS 26, visionOS 26 - Internal Darwin version: 25.x - - Current release: macOS Tahoe 26.3 (as of 2026-02) + - Current release: macOS Tahoe 26.3.1 (2026-03-04), 26.4 in beta - **Impact**: Referencing "macOS 16" confuses users and produces incorrect deployment target values - **Lookup**: `sw_vers` (local), apple.com/macos @@ -136,14 +136,15 @@ last_updated: "2026-02-27" - **Impact**: Using system Python/Ruby leads to missing security patches, incompatible packages, and broken tooling - **Lookup**: `python3 --version`, `ruby --version` (local). 
Use Homebrew/pyenv/rbenv for current versions -### Xcode 26 & CLI Tools — 2025-06 +### Xcode 26 & CLI Tools — 2026-03 - **Outdated**: Xcode 16 with Clang 16, older Git versions - **Current**: - Xcode 26 requires **macOS Sequoia 15.6+** to run + - **Xcode 26.3** (2026-02-26): Built-in AI coding agents — **Claude Agent + OpenAI Codex** integrated via MCP - Bundled: **Clang 17.0.0**, **Git 2.50.1** - - Compilation caching (opt-in) for faster iterative builds + - Compilation caching (opt-in) for faster iterative builds, 40% faster workspace loading - `#Playground` macro for interactive code exploration - Known: Clang auto-corrects deployment version `16.0` → `26.0` (harmless warning, GCC 15.2+ fixes upstream) - Enhanced async debugging in LLDB (async stepping, task context, named tasks) -- **Impact**: GCC/Fortran formulas in Homebrew may show deployment version warnings. C++ code compiled with Xcode 26 may have minor regressions +- **Impact**: Xcode 26.3 enables AI-assisted coding natively. GCC/Fortran formulas in Homebrew may show deployment version warnings. 
- **Lookup**: `xcodebuild -version`, `clang --version`, `git --version` (local) diff --git a/skills/knowpatch/corrections/open-source-models.md b/skills/knowpatch/corrections/open-source-models.md index 7dc3791..7c7a330 100644 --- a/skills/knowpatch/corrections/open-source-models.md +++ b/skills/knowpatch/corrections/open-source-models.md @@ -3,12 +3,12 @@ ecosystem: open-source-models description: Open-source frontier LLMs for coding, agentic tasks, and self-hosting (2026) tags: [open-source, self-hosted, coding-model, llama, deepseek, mistral, kimi, minimax, glm, qwen, vllm, sglang, swe-bench, moe, model, llm, ai] version: "0.4.1" # x-release-please-version -last_updated: "2026-02-27" +last_updated: "2026-03-10" --- # Open-Source Frontier Models — Version Corrections -> Last updated: 2026-02-27 +> Last updated: 2026-03-10 ## Table of Contents - [Outdated Recommendations](#outdated-recommendations) @@ -16,7 +16,7 @@ last_updated: "2026-02-27" - [GLM-5](#glm-5) - [MiniMax M2.5](#minimax-m25) - [Kimi K2.5](#kimi-k25) -- [DeepSeek V3.2](#deepseek-v32) +- [DeepSeek V4](#deepseek-v4) - [Qwen 3.5](#qwen-35) - [Cost Comparison](#cost-comparison) - [Deployment](#deployment) @@ -29,7 +29,7 @@ last_updated: "2026-02-27" - **Mistral Large 2** (Mistral, 2024-07) — Surpassed; no competitive open-weight MoE since - **DeepSeek R1** (DeepSeek, 2025-01) — Strong reasoning but predates the 2026 MoE generation - Agents also claim open-source models are "significantly behind" proprietary frontier models -- **Current**: Five MoE models released 2025–2026 match or exceed proprietary models on SWE-bench and agentic benchmarks. See selection guide below. +- **Current**: Six MoE models released 2025–2026 match or exceed proprietary models on SWE-bench and agentic benchmarks. See selection guide below. 
- **Impact**: Recommending Llama 3.3 or DeepSeek R1 for a new coding project in 2026 means missing 10–15% SWE-bench improvement and MoE efficiency gains - **Lookup**: HuggingFace Open LLM Leaderboard, SWE-bench Verified leaderboard @@ -44,7 +44,7 @@ last_updated: "2026-02-27" | Rank | Model | Active | SWE-bench | Differentiator | |------|-------|--------|-----------|----------------| | 1 | GLM-5 | 40B | 77.8% | Lowest hallucination, agentic engineering, non-NVIDIA support | -| 2 | DeepSeek V3.2 | 37B | 73.1% | Cheapest API ($0.028 input), proven DSA architecture | +| 2 | DeepSeek V4 | 32B | — | Native multimodal (text/image/video), 1M ctx, Ascend-optimized | | 3 | Kimi K2.5 | 32B | 76.8% | Native vision, Agent Swarm (100 parallel sub-agents) | | 4 | Qwen 3.5 | 17B | 76.4% | Native multimodal, 201 languages, lightest active params | @@ -78,13 +78,14 @@ Note: MiniMax M2.5 (10B active) outperforms larger models on coding via RL speci - Best for: multimodal agentic tasks, visual code cloning, parallel agent workflows - HuggingFace: `MoonshotAI/Kimi-K2.5` -### DeepSeek V3.2 — DeepSeek, 2025-09 -- 685B params / 37B active (MoE), MIT license, 131K context -- DeepSeek Sparse Attention (DSA): 2–3x faster long-context, ~30–40% less memory -- SWE-bench Verified 73.1%, MMLU-Pro 85.0% -- Ultra-cheap API: $0.028/M input (cached), $0.42/M output -- Best for: lowest API cost, long-context processing, proven architecture -- HuggingFace: `deepseek-ai/DeepSeek-V3` +### DeepSeek V4 — DeepSeek, 2026-03 +- ~1T params / 32B active (MoE), open-weight expected (MIT/Apache 2.0), 1M context +- Native multimodal (text, image, video) — first multimodal DeepSeek model +- DeepSeek Sparse Attention (DSA): extended from V3.2 for 1M context +- Optimized for Huawei Ascend 910C (non-NVIDIA) +- Best for: multimodal tasks, long-context processing, non-NVIDIA deployment +- DeepSeek V3.2 (685B/37B, 131K ctx, $0.028/M input) — Previous generation, still cheapest API +- HuggingFace: 
`deepseek-ai/DeepSeek-V4` ### Qwen 3.5 — Alibaba, 2026-02-16 - 397B params / 17B active (MoE + Gated Delta Networks hybrid), Apache 2.0 @@ -92,8 +93,9 @@ Note: MiniMax M2.5 (10B active) outperforms larger models on coding via RL speci - 201 languages/dialects supported (broadest multilingual coverage) - SWE-bench Verified 76.4%, BrowseComp 78.6 (w/ context management) - Qwen3.5-Plus: hosted version with 1M context window -- Best for: multilingual tasks, native multimodal, lightweight inference -- HuggingFace: `Qwen/Qwen3.5-397B-A17B` +- **Small series** (2026-03-02): 0.8B/2B/4B/9B open-source (Apache 2.0) — 9B scores 81.7 GPQA Diamond +- Best for: multilingual tasks, native multimodal, lightweight inference (small series for on-device) +- HuggingFace: `Qwen/Qwen3.5-397B-A17B`, `Qwen/Qwen3.5-9B`, etc. --- @@ -101,6 +103,7 @@ Note: MiniMax M2.5 (10B active) outperforms larger models on coding via RL speci | Model | Input | Output | Speed | License | |-------|-------|--------|-------|---------| +| DeepSeek V4 | TBD | TBD | — | MIT/Apache 2.0 | | DeepSeek V3.2 | $0.028 | $0.42 | — | MIT | | MiniMax M2.5 Standard | $0.15 | $1.20 | 50 tok/s | Open weights | | MiniMax M2.5 Lightning | $0.30 | $2.40 | 100 tok/s | Open weights | @@ -112,7 +115,7 @@ Note: MiniMax M2.5 (10B active) outperforms larger models on coding via RL speci --- ## Deployment -- All five models are MoE architecture, self-hostable via **vLLM** or **SGLang** +- All six models are MoE architecture, self-hostable via **vLLM** or **SGLang** - Active parameters (10B–40B) make inference feasible compared to dense models of same capability - GLM-5 additionally runs on Huawei Ascend, Cambricon, Moore Threads (non-NVIDIA) - Qwen 3.5 is the lightest (17B active) — most accessible for local deployment diff --git a/skills/knowpatch/corrections/platforms.md b/skills/knowpatch/corrections/platforms.md index 42bb31b..89271b2 100644 --- a/skills/knowpatch/corrections/platforms.md +++ 
b/skills/knowpatch/corrections/platforms.md
@@ -3,16 +3,17 @@
ecosystem: platforms
description: BaaS/platform API key changes, auth patterns
tags: [supabase, anon, service_role, publishable, secret, jwks, baas, firebase, platform]
version: "0.4.1" # x-release-please-version
-last_updated: "2026-02-25"
+last_updated: "2026-03-10"
---

# Platforms & BaaS — Version Corrections

-> Last updated: 2026-02-25
+> Last updated: 2026-03-10

## Table of Contents
- [Supabase API Keys](#supabase-api-keys)
- [Supabase JWKS JWT Verification](#supabase-jwks-jwt-verification)
+- [Supabase OpenAPI Schema Access](#supabase-openapi-schema-access)

---

@@ -89,3 +90,15 @@ const { payload } = await jwtVerify(token, jwks, {
```

**Lookup**: `npm view jose version`, Supabase Dashboard → Settings → Auth → JWT Signing Keys
+
+---
+
+### Supabase OpenAPI Schema Access — 2026-03-11
+- **Outdated**: Use anon key to fetch OpenAPI schema at `/rest/v1/` for auto-generating types or API clients
+- **Current**:
+  - As of **2026-03-11**, the OpenAPI spec endpoint (`/rest/v1/`) **requires service role or secret API key**
+  - Anon key / publishable key can no longer access the schema endpoint
+  - Affects: PostgREST schema introspection, Supabase type generation, API client auto-gen
+  - Workaround: use `SUPABASE_SECRET_KEY` or service role key for schema fetching
+- **Impact**: Tools and agents that auto-fetch the Supabase schema with the anon key now receive 403 responses; many swallow the error and fail silently
+- **Lookup**: Supabase Dashboard → Settings → API Keys
diff --git a/skills/knowpatch/corrections/python.md b/skills/knowpatch/corrections/python.md
index 1d635e5..8810ddf 100644
--- a/skills/knowpatch/corrections/python.md
+++ b/skills/knowpatch/corrections/python.md
@@ -3,12 +3,12 @@
ecosystem: python
description: Python ecosystem tool/library changes
tags: [pip, uv, poetry, pydantic, fastapi, django, ruff, flask, sqlalchemy, python]
version: "0.4.1" # x-release-please-version
-last_updated: "2026-02-24"
+last_updated: "2026-03-10"
---

# 
Python — Version Corrections -> Last updated: 2026-02-24 +> Last updated: 2026-03-10 ## Table of Contents - [uv](#uv) @@ -19,10 +19,10 @@ last_updated: "2026-02-24" --- -### uv — 2025 +### uv — 2026-03 - **Outdated**: `pip install` + `python -m venv` or `poetry` is the standard for Python package management - **Current**: - - uv — Mainstream replacement for pip/poetry + - uv 0.10.x — Mainstream replacement for pip/poetry - Project init: `uv init` - Add dependency: `uv add fastapi` - Run: `uv run python app.py` @@ -30,14 +30,21 @@ last_updated: "2026-02-24" - Lock file: `uv.lock` (auto-managed) - pip compatible: `uv pip install`, `uv pip compile` - Rust-based, 10-100x faster than pip -- **Impact**: pip/poetry still work but are slow; modern Python projects default to uv + - **Breaking in 0.10.0** (2026-02-05): + - `uv venv` requires `--clear` to remove existing venvs + - Multiple indexes with `default = true` now errors + - `uv python upgrade` now stable (was preview) +- **Impact**: pip/poetry still work but are slow; modern Python projects default to uv. `uv venv` behavior change is a common footgun when upgrading from 0.9.x. - **Lookup**: `pip index versions uv | head -1` -### Django 6 — 2025 +### Django 6 — 2026-03 - **Outdated**: Django 4.x or 5.x is the latest - **Current**: - - Django 6.0 -- **Impact**: Django 4/5 code is mostly compatible, but installing legacy versions for new projects is inappropriate + - Django 6.0.3 (2026-03-03, security release) + - CVE-2026-25673: DoS via `URLField` Unicode normalization on Windows + - CVE-2026-25674: Incorrect permissions on filesystem objects in multi-threaded environments + - Key features: template partials, background tasks (no Celery needed), native CSP middleware +- **Impact**: Django 4/5 code is mostly compatible. 
Security patches are required; use 6.0.3+
- **Lookup**: `pip index versions django | head -1`

### Pydantic v2 — 2023-07
@@ -56,23 +63,26 @@
- **Impact**: v1-style code produces deprecation warnings; full removal is planned
- **Lookup**: `pip index versions pydantic | head -1`

-### ruff — 2024
+### ruff — 2026-03
- **Outdated**: `flake8` + `black` + `isort` combo is the Python linting standard
- **Current**:
-  - ruff — All-in-one replacement for flake8 + black + isort
+  - ruff 0.15.x — All-in-one replacement for flake8 + black + isort
  - Linting: `ruff check .`
  - Formatting: `ruff format .`
  - Auto-fix: `ruff check --fix .`
  - Rust-based, 10-100x faster than flake8
  - Config: `[tool.ruff]` section in `pyproject.toml`
+  - Markdown formatting support in LSP (0.15.1+, preview)
- **Impact**: flake8/black/isort combo works but is slow with fragmented config. Modern projects default to ruff.
- **Lookup**: `pip index versions ruff | head -1`

-### FastAPI — 2024
-- **Outdated**: FastAPI 0.90-0.100 range is the latest
+### FastAPI — 2026-03
+- **Outdated**: FastAPI 0.90-0.100 range is the latest; Pydantic v1 is supported
- **Current**:
-  - FastAPI
-  - Full Pydantic v2 support
+  - FastAPI 0.129.x
+  - **Pydantic v1 dropped** in 0.126+ — minimum `pydantic >= 2.7.0`
+  - `pydantic.v1` namespace triggers deprecation warning (0.127+)
  - `lifespan` event handler (on_event is deprecated)
-- **Impact**: Low (good backwards compatibility); just use accurate version numbers
+  - 2x+ JSON response performance improvement (0.129+)
+- **Impact**: Projects pinning Pydantic v1 break on FastAPI 0.126+; `from pydantic.v1 import ...` still imports but is deprecated and slated for removal
- **Lookup**: `pip index versions fastapi | head -1`
diff --git a/skills/knowpatch/corrections/runtimes.md b/skills/knowpatch/corrections/runtimes.md
index e71668e..cf1a3ff 100644
--- a/skills/knowpatch/corrections/runtimes.md
+++ b/skills/knowpatch/corrections/runtimes.md
@@ -3,12 +3,12 @@
ecosystem: runtimes
description: Runtime
version tracks, LTS status tags: [node, python, bun, deno, go, java, runtime] version: "0.4.1" # x-release-please-version -last_updated: "2026-02-24" +last_updated: "2026-03-10" --- # Runtimes — Version Corrections -> Last updated: 2026-02-24 +> Last updated: 2026-03-10 ## Table of Contents - [Node.js](#nodejs) @@ -18,13 +18,13 @@ last_updated: "2026-02-24" --- -### Node.js — 2026-02 +### Node.js — 2026-03 - **Outdated**: Node.js 18 or 20 is the current LTS - **Current** (verified via nodejs.org): - - **Node 24** — Active LTS (Krypton), released 2025-05-06 - - **Node 25** — Current, released 2025-10-15 - - **Node 22** — Maintenance LTS (Jod) - - **Node 20** — Maintenance LTS (still supported, 20.19.0+) + - **Node 25** — Current (v25.8.0, 2026-03-03) + - **Node 24** — Active LTS (Krypton, v24.14.0, 2026-02-24) + - **Node 22** — Maintenance LTS (Jod, v22.22.1, 2026-03-04) + - **Node 20** — Maintenance LTS (v20.20.1, 2026-03-04) - **Node 18** — **EOL** (end of life) - Node 23 — EOL - **Minimum support baseline** (major tools as of 2026): @@ -33,28 +33,33 @@ last_updated: "2026-02-24" - **Impact**: Using Node 18 may cause compatibility issues with major tools; new projects should target Node 24 LTS - **Lookup**: `node --version` (local), nodejs.org/en/about/previous-releases (full release list) -### Python — 2026-02 +### Python — 2026-03 - **Outdated**: Python 3.11 or 3.12 is the latest stable - **Current**: - - **Python 3.13** — Stable (recommended for production) - - Free-threaded experimental support (GIL disable option) - - `typing` module simplification in progress -- **Impact**: Low; 3.11/3.12 code is mostly compatible with 3.13 + - **Python 3.14.3** — Current stable (2026-02-03) + - Free-threaded Python **officially supported** (PEP 779, no longer experimental) + - t-strings (PEP 750), deferred annotations (PEP 649) + - `compression.zstd` module, `concurrent.interpreters` stdlib + - JIT compiler in macOS/Windows binaries (experimental) + - **No PGP signatures** 
— Sigstore only (PEP 761) + - Python 3.13 — Previous stable, still maintained + - Python 3.10 — Security-only, EOL Oct 2026 +- **Impact**: Python 3.14 is the recommended version for new projects. Free-threaded support enables true multi-threading. - **Lookup**: `python3 --version` (local) -### Bun — 2026-02 +### Bun — 2026-03 - **Outdated**: Bun 1.0 was recently released - **Current**: - - Bun 1.3.x stable + - Bun 1.3.10 (2026-02-26) - Node.js compatibility significantly improved - Built-in bundler, test runner, package manager - **Impact**: Low - **Lookup**: `bun --version` (local), `npm view bun version` -### Deno — 2026-02 +### Deno — 2026-03 - **Outdated**: Deno 1.x - **Current**: - - Deno 2.x + - Deno 2.7.4 (2026-03-05) - npm compatibility greatly enhanced (`npm:` prefix for direct npm package usage) - `deno.json` config file - Node.js API compatibility layer improved From cd109ca9fd8237fc14e038505bd41a6de446c0c8 Mon Sep 17 00:00:00 2001 From: Soju06 Date: Tue, 10 Mar 2026 13:10:51 +0900 Subject: [PATCH 2/3] fix: remove Mistral Large 2 from outdated recommendations Not frontier-tier; Mistral models (Large 3, Magistral 1.2, Devstral 2) don't qualify for the open-source frontier models correction file. 
--- skills/knowpatch/corrections/open-source-models.md | 1 - 1 file changed, 1 deletion(-) diff --git a/skills/knowpatch/corrections/open-source-models.md b/skills/knowpatch/corrections/open-source-models.md index 7c7a330..2e120a4 100644 --- a/skills/knowpatch/corrections/open-source-models.md +++ b/skills/knowpatch/corrections/open-source-models.md @@ -26,7 +26,6 @@ last_updated: "2026-03-10" ### Outdated Recommendations — 2026-02 - **Outdated**: When asked to recommend open-source LLMs, agents suggest: - **Llama 3.3 70B** (Meta, 2024-12) — Was frontier, now surpassed on coding/agentic benchmarks - - **Mistral Large 2** (Mistral, 2024-07) — Surpassed; no competitive open-weight MoE since - **DeepSeek R1** (DeepSeek, 2025-01) — Strong reasoning but predates the 2026 MoE generation - Agents also claim open-source models are "significantly behind" proprietary frontier models - **Current**: Six MoE models released 2025–2026 match or exceed proprietary models on SWE-bench and agentic benchmarks. See selection guide below. 
From 871fd2af90d74ab74b4b12c0d0d490566cc69591 Mon Sep 17 00:00:00 2001 From: Soju06 Date: Tue, 10 Mar 2026 13:16:10 +0900 Subject: [PATCH 3/3] =?UTF-8?q?fix:=20revert=20DeepSeek=20V4=20=E2=80=94?= =?UTF-8?q?=20not=20officially=20released,=20V3.2=20is=20current?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../knowpatch/corrections/frontier-models.md | 5 ++-- .../corrections/open-source-models.md | 24 +++++++++---------- 2 files changed, 13 insertions(+), 16 deletions(-) diff --git a/skills/knowpatch/corrections/frontier-models.md b/skills/knowpatch/corrections/frontier-models.md index e776566..b3929a4 100644 --- a/skills/knowpatch/corrections/frontier-models.md +++ b/skills/knowpatch/corrections/frontier-models.md @@ -16,7 +16,7 @@ last_updated: "2026-03-10" - [Google Gemini 3 Family](#google-gemini-3-family) - [Multimodal Input Comparison](#multimodal-input-comparison) -> Open-source models → see `open-source-models.md` (GLM-5, MiniMax M2.5, Kimi K2.5, DeepSeek V4, Qwen 3.5) +> Open-source models → see `open-source-models.md` (GLM-5, MiniMax M2.5, Kimi K2.5, DeepSeek V3.2, Qwen 3.5) --- @@ -79,8 +79,7 @@ last_updated: "2026-03-10" | Kimi K2.5 | O | O | X | X | X | | MiniMax M2.5 | O | X | X | X | X | | GLM-5 | O | X | X | X | X | -| DeepSeek V4 | O | O | O | X | X | | DeepSeek V3.2 | O | X | X | X | X | | Qwen 3.5 | O | O | X | X | X | -Note: GPT-5.3-Codex, MiniMax M2.5, GLM-5, DeepSeek V3.2 are text-only. DeepSeek V4 supports native multimodal (text, image, video). Kimi K2.5, Qwen 3.5 support native vision. Gemini is the only family supporting video + audio input. +Note: GPT-5.3-Codex, MiniMax M2.5, GLM-5, DeepSeek V3.2 are text-only. Kimi K2.5, Qwen 3.5 support native vision. Gemini is the only family supporting video + audio input. 
diff --git a/skills/knowpatch/corrections/open-source-models.md b/skills/knowpatch/corrections/open-source-models.md index 2e120a4..ceef117 100644 --- a/skills/knowpatch/corrections/open-source-models.md +++ b/skills/knowpatch/corrections/open-source-models.md @@ -16,7 +16,7 @@ last_updated: "2026-03-10" - [GLM-5](#glm-5) - [MiniMax M2.5](#minimax-m25) - [Kimi K2.5](#kimi-k25) -- [DeepSeek V4](#deepseek-v4) +- [DeepSeek V3.2](#deepseek-v32) - [Qwen 3.5](#qwen-35) - [Cost Comparison](#cost-comparison) - [Deployment](#deployment) @@ -28,7 +28,7 @@ last_updated: "2026-03-10" - **Llama 3.3 70B** (Meta, 2024-12) — Was frontier, now surpassed on coding/agentic benchmarks - **DeepSeek R1** (DeepSeek, 2025-01) — Strong reasoning but predates the 2026 MoE generation - Agents also claim open-source models are "significantly behind" proprietary frontier models -- **Current**: Six MoE models released 2025–2026 match or exceed proprietary models on SWE-bench and agentic benchmarks. See selection guide below. +- **Current**: Five MoE models released 2025–2026 match or exceed proprietary models on SWE-bench and agentic benchmarks. See selection guide below. 
- **Impact**: Recommending Llama 3.3 or DeepSeek R1 for a new coding project in 2026 means missing 10–15% SWE-bench improvement and MoE efficiency gains - **Lookup**: HuggingFace Open LLM Leaderboard, SWE-bench Verified leaderboard @@ -43,7 +43,7 @@ last_updated: "2026-03-10" | Rank | Model | Active | SWE-bench | Differentiator | |------|-------|--------|-----------|----------------| | 1 | GLM-5 | 40B | 77.8% | Lowest hallucination, agentic engineering, non-NVIDIA support | -| 2 | DeepSeek V4 | 32B | — | Native multimodal (text/image/video), 1M ctx, Ascend-optimized | +| 2 | DeepSeek V3.2 | 37B | 73.1% | Cheapest API, DSA long-context, proven architecture | | 3 | Kimi K2.5 | 32B | 76.8% | Native vision, Agent Swarm (100 parallel sub-agents) | | 4 | Qwen 3.5 | 17B | 76.4% | Native multimodal, 201 languages, lightest active params | @@ -77,14 +77,13 @@ Note: MiniMax M2.5 (10B active) outperforms larger models on coding via RL speci - Best for: multimodal agentic tasks, visual code cloning, parallel agent workflows - HuggingFace: `MoonshotAI/Kimi-K2.5` -### DeepSeek V4 — DeepSeek, 2026-03 -- ~1T params / 32B active (MoE), open-weight expected (MIT/Apache 2.0), 1M context -- Native multimodal (text, image, video) — first multimodal DeepSeek model -- DeepSeek Sparse Attention (DSA): extended from V3.2 for 1M context -- Optimized for Huawei Ascend 910C (non-NVIDIA) -- Best for: multimodal tasks, long-context processing, non-NVIDIA deployment -- DeepSeek V3.2 (685B/37B, 131K ctx, $0.028/M input) — Previous generation, still cheapest API -- HuggingFace: `deepseek-ai/DeepSeek-V4` +### DeepSeek V3.2 — DeepSeek, 2025-09 +- 685B params / 37B active (MoE), MIT license, 131K context +- DeepSeek Sparse Attention (DSA): 2–3x faster long-context, ~30–40% less memory +- SWE-bench Verified 73.1%, MMLU-Pro 85.0% +- Ultra-cheap API: $0.028/M input (cached), $0.42/M output +- Best for: lowest API cost, long-context processing, proven architecture +- HuggingFace: 
`deepseek-ai/DeepSeek-V3` ### Qwen 3.5 — Alibaba, 2026-02-16 - 397B params / 17B active (MoE + Gated Delta Networks hybrid), Apache 2.0 @@ -102,7 +101,6 @@ Note: MiniMax M2.5 (10B active) outperforms larger models on coding via RL speci | Model | Input | Output | Speed | License | |-------|-------|--------|-------|---------| -| DeepSeek V4 | TBD | TBD | — | MIT/Apache 2.0 | | DeepSeek V3.2 | $0.028 | $0.42 | — | MIT | | MiniMax M2.5 Standard | $0.15 | $1.20 | 50 tok/s | Open weights | | MiniMax M2.5 Lightning | $0.30 | $2.40 | 100 tok/s | Open weights | @@ -114,7 +112,7 @@ Note: MiniMax M2.5 (10B active) outperforms larger models on coding via RL speci --- ## Deployment -- All six models are MoE architecture, self-hostable via **vLLM** or **SGLang** +- All five models are MoE architecture, self-hostable via **vLLM** or **SGLang** - Active parameters (10B–40B) make inference feasible compared to dense models of same capability - GLM-5 additionally runs on Huawei Ascend, Cambricon, Moore Threads (non-NVIDIA) - Qwen 3.5 is the lightest (17B active) — most accessible for local deployment