brianletort.ai

Industry

The AI compute stack, read weekly.

What moved this week in AI compute, what it cost, and what it means. Across software, hardware, and networking — with graded sources, falsifiable predictions, and a public track record.

Built for senior decision-makers who need to separate the trend from the headline: boards governing AI capital allocation, investors weighing where the marginal dollar earns its return, AI architects deciding what to procure and what to wait on. Each issue is the weekly read. The companion LLM Evolutionary Tree is a living view of model lineage that the weekly brief annotates as the field branches.

Latest issue

Issue 01 / Week 17 of 2026

The marginal dollar at every layer earns less. Except the network.

Capital is still flooding the AI stack, but the marginal dollar earns less than the prior dollar at every layer except one. Hyperscalers are deploying ~$700B of 2026 capex against ~$120B of AI-attributable revenue — a ratio that gets tested when Microsoft, Google, Meta, and Amazon all print Q1 between April 29 and May 6. Frontier labs are committing forward compute at multiples of their disclosed revenue. The on-prem floor moved up sharply this week with DeepSeek V4 (open weights, frontier-class on coding), reshaping enterprise procurement: the question is no longer whether to self-host frontier capability, but which workload to move and on what fabric. The durable winner of the buildout is the network and interconnect layer that connects cloud, neocloud, and on-prem for hybrid AI — the only layer where pricing power compounds rather than compresses.

Sibling publication

The Model Pulse / Issue 01 / April 2026 Recap

April rewrote the floor: open weights crossed the closed frontier, and the safety-gated frontier became a procurement criterion.

April 2026 was the most productive month in frontier model history. Eleven model rows landed in the LLM Evolutionary Tree across eight vendors, including the first open-weights MoE models to score frontier-class on SWE-Bench Pro coding (Kimi K2.6 at 58.6 and GLM-5.1 at 58.4, both above GPT-5.4 at 57.7 and Claude Opus 4.6 at 57.3). DeepSeek V4 shipped in two configurations under MIT license with 1M-token context and 1.6T total parameters, moving on-prem coding from 'can we?' to 'which workload first?' Anthropic withheld its Mythos flagship on cyber-capability grounds after UK AISI confirmed autonomous offensive capability — an inflection that turns capability gating from a research-org concern into a procurement diligence requirement. The closed frontier responded: GPT-5.5 (Apr 23) and Claude Opus 4.7 (Apr 16) reset the closed-source ceiling, with adaptive thinking and ultra-long context as the new defaults. Read together, April was the month the canopy widened on three axes at once: open-vs-closed parity on coding, reasoning-as-default at every tier, and capability gating as a market signal.

The deep read on the software side of the AI stack — lineage, architecture, benchmarks, vendor signals — anchored to the LLM Evolutionary Tree. The AI Stack Weekly above covers the cross-stack flywheel; The Model Pulse drills the model layer.

How we read it

Three lenses. One flywheel. A working filter for signal vs noise.

Cheaper inference pulls in more workloads. More workloads need more compute. More compute needs denser fabric. Denser fabric unlocks new architectures, which lower the cost of inference again. Read a week’s news across that loop and the noise sorts itself out: a chip launch is real if it changes power per rack; a model release is real if it changes which workload runs on-prem; a fabric standard is real if it shortens hybrid deployment time.

Every claim is graded 1–5 on source quality. Every prediction is falsifiable, time-bounded, and scored as a hit or miss in future issues. The framework that drives the filter is published separately and revised when the evidence demands it.

Read the working framework
Software: Jevons · Hardware: Huang · Networking: Metcalfe + Gilder

Companion view

The LLM Evolutionary Tree.

A living lineage of frontier and open-weight models. Updated as new families branch and converge. The weekly brief points to the tree when a release reshapes the canopy; the tree points back when context is needed.

Who reads it — and what they get.

  • Boards and audit committees. The week’s capex direction relative to disclosed AI revenue, with the levers that signal a re-rating before earnings do.
  • Investors and analysts. Where the marginal dollar is earning least and most across frontier labs, hyperscalers, neoclouds, and on-prem — and the falsifiable predictions that test the read.
  • CIOs, CTOs, and AI architects. What changed in procurement this week — open vs closed models, merchant vs custom silicon, scale-up vs hybrid fabric — and what to deploy versus what to wait on.
  • Operators of AI estates. The catalysts in the next 7–14 days that change unit economics: earnings prints, supply ramps, regulatory milestones, and power-market clearing.

Operate. Publish. Teach.