Software.
DeepSeek V4 released — MIT-licensed, 1.6T-parameter MoE, 1M-token context, frontier-class on coding
HuggingFace, deepseek.com
Open-weight models lead closed models on SWE-Bench Pro — Kimi K2.6 (58.6) and GLM-5.1 (58.4) vs GPT-5.4 (57.7) and Opus 4.6 (57.3)
Artificial Analysis, HuggingFace model cards
Anthropic withholds 'Mythos' flagship on cyber-capability grounds; UK AISI confirms autonomous offensive capability
anthropic.com, red.anthropic.com, aisi.gov.uk
What this means
Open weights crossing the closed frontier on coding moves the on-prem question from 'can we?' to 'which workload first?' Procurement teams should pull DeepSeek V4 and the leading open coding models into pilots this quarter; merchant-API-vs-self-host is now a real fork, not a hypothetical. And Anthropic withholding Mythos on cyber-risk grounds is the new diligence signal: vendor-risk frameworks need a capability-gate criterion, not just an availability SLA.