Tuesday, 7 April 2026, and the AI industry is having one of those days where you genuinely need to read every story. From compute infrastructure deals worth billions to open-source models making proprietary rivals sweat, here is today’s full briefing.
Anthropic has announced a new agreement with Google and Broadcom to secure multiple gigawatts of next-generation TPU capacity, expected to come online starting in 2027. The announcement came alongside a notable business milestone: Anthropic’s annualised revenue run-rate has now crossed $30 billion, up from around $9 billion at the end of 2025. The number of enterprise customers spending over $1 million annually has doubled in less than two months, now exceeding 1,000.
This is a major compute bet. The vast majority of the new capacity will be based in the United States, deepening Anthropic’s November 2025 commitment to invest $50 billion in American AI infrastructure. Anthropic already runs Claude across AWS Trainium, Google TPUs, and NVIDIA GPUs, a deliberate hardware diversity strategy that gives it more resilience than rivals locked into a single vendor.
On X, the reaction was substantial: 14,000 likes on the announcement, with most of the commentary focusing on the sheer scale of the compute commitment. Developer circles are mostly reading the TPU diversification as a smart hedge against vendor lock-in. A minority view argues the US infrastructure framing feels political. Both takes have something to them. At a $30B run-rate, with million-dollar enterprise customers having doubled in under two months, this is less about narrative and more about not getting caught short on compute when demand spikes again.
Google DeepMind has released Gemma 4, a new family of open-weight models built on Gemini 3 research. The numbers are hard to ignore. The flagship Gemma 4 31B Thinking model scores 89.2% on AIME 2026 mathematics, 84.3% on GPQA Diamond scientific knowledge, and 86.4% on the tau2-bench retail agentic tasks. The 26B Mixture-of-Experts variant activates only around 3-4 billion parameters at inference, meaning you get near-frontier intelligence without the compute bill to match.
The models support 140 languages, native multimodal reasoning, agentic function-calling workflows, and fine-tuning on your own hardware. They are available under Apache 2.0, which permits commercial use with no field-of-use restrictions.
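For anyone who wants to kick the tyres locally, here is a minimal inference sketch using Hugging Face transformers. The model ID is an assumption on our part: previous Gemma releases used identifiers like google/gemma-2-27b-it, and Gemma 4's actual names may differ, so check the official model card before running.

```python
# Minimal local-inference sketch for an open-weight Gemma checkpoint.
# ASSUMPTION: the model ID below is hypothetical, inferred from how
# earlier Gemma releases were named. Verify against the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-4-26b-a3b-it"  # hypothetical identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to fit consumer GPUs
    device_map="auto",           # shard across available GPUs/CPU
)

messages = [
    {"role": "user", "content": "Summarise the Apache 2.0 licence in two sentences."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=200)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```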
r/LocalLLaMA has been buzzing since launch. The top sentiment: “mindblowingly good if configured right,” and that qualifier matters. The main community concern is that peak performance requires real setup time. Developers who’ve put that work in are reporting results that comfortably compete with models several times the parameter count. One recurring take: “Google just made the open-source vs. closed argument obsolete.”
That might be an overstatement, but it captures the mood. The 26B A3B MoE variant in particular is drawing serious production interest given its inference efficiency on consumer-grade hardware.
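Why does the A3B design cut the compute bill so sharply? A standard back-of-envelope rule puts a transformer's decode cost at roughly 2 FLOPs per parameter per token, and for an MoE only the active parameters count. The sketch below uses the article's figures (26B total, roughly 3-4B active); it is an estimate, not a benchmark.

```python
# Back-of-envelope decode cost: ~2 FLOPs per ACTIVE parameter per token.
# Figures taken from the article; real throughput also depends on
# memory bandwidth, expert-routing overhead, batch size, etc.
def flops_per_token(active_params: float) -> float:
    return 2 * active_params

dense_26b = flops_per_token(26e9)   # if all 26B parameters were active
moe_a3b = flops_per_token(3.5e9)    # ~3-4B active (midpoint assumption)

print(f"dense 26B : {dense_26b:.2e} FLOPs/token")
print(f"26B A3B   : {moe_a3b:.2e} FLOPs/token")
print(f"ratio     : ~{dense_26b / moe_a3b:.1f}x cheaper per token")
```

The caveat: all 26B parameters still have to sit in memory, so the VRAM requirement is closer to the dense model's. The saving is in per-token compute, which is what makes consumer-grade inference plausible.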

OpenAI’s strategy chief Jason Kwon sent a letter to the California and Delaware attorneys general on Monday, asking them to investigate what he described as “improper and anti-competitive behavior” by Elon Musk. According to the letter, Musk has been coordinating with Meta CEO Mark Zuckerberg in efforts designed to undermine OpenAI’s work toward AGI. Jury selection for the Musk-vs-OpenAI civil trial begins April 27.
The specific allegation that Musk coordinated with Zuckerberg adds a new dimension to a story that has mostly been framed as a two-party dispute. OpenAI’s framing is that these attacks are meant to transfer control of frontier AI development away from mission-driven organisations toward competitors who “spurn any responsibility for safety.”
The reaction on X and Hacker News is broadly skeptical. Most people reading this see it as pre-trial PR positioning, not a genuine regulatory escalation. The irony most often cited: a company valued at $852 billion invoking mission-driven nonprofit principles in a letter to the attorneys general. That said, the Zuckerberg-coordination allegation is genuinely new and could become significant at trial. April 27 is worth watching.

According to an Axios scoop, Meta plans to eventually open-source versions of its next AI models, with Alexandr Wang leading the effort internally. The report notes Meta is also in talks to rent or buy additional TPU capacity, suggesting the open-source push comes with genuine compute backing rather than being an afterthought.
The timing isn’t accidental. With Google’s Gemma 4 just landing under Apache 2.0, Meta’s announcement puts both companies on the same side of a widening open-vs-closed divide, while directly competing with each other for developer adoption. Developers are already framing this as a two-front open-weight battle.
The reaction is positive, but carefully hedged. Meta’s Llama track record is genuinely strong, so the credibility is there. Still, the word “eventually” and implied caveats around safety reviews are raising familiar questions about timing. The Alexandr Wang connection also adds a government-contract dimension that some readers are flagging: Wang has spent significant time at the intersection of AI and national security, and some see his role here as meaningful strategic signalling.
Microsoft announced three new models now available through Microsoft Foundry: MAI-Transcribe-1 (speech-to-text), MAI-Voice-1 (text-to-speech), and MAI-Image-2 (image generation). The pricing is clearly designed to compete: MAI-Transcribe-1 starts at $0.36 per audio hour. On accuracy, it beats Whisper-large-v3 in 11 of 25 top global languages and outperforms Gemini 3.1 Flash in 11 of the remaining 14.
MAI-Voice-1 starts at $22 per million characters. MAI-Image-2 comes in at $5 per million text input tokens and $33 per million image output tokens. Developers can try the models now through Microsoft Foundry or the MAI Playground (US only).
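To put those per-unit prices in workload terms, here is a quick cost sketch using only the rates quoted above. The workload volumes are invented for illustration; plug in your own.

```python
# Monthly cost sketch using the per-unit prices quoted above.
# Workload volumes below are hypothetical, for illustration only.
transcribe_rate = 0.36   # $ per audio hour (MAI-Transcribe-1)
voice_rate = 22.0        # $ per million characters (MAI-Voice-1)
image_in_rate = 5.0      # $ per million text input tokens (MAI-Image-2)
image_out_rate = 33.0    # $ per million image output tokens (MAI-Image-2)

audio_hours = 500            # hypothetical transcription volume
tts_chars = 20_000_000       # hypothetical TTS characters
img_in_tokens = 2_000_000    # hypothetical image prompt tokens
img_out_tokens = 10_000_000  # hypothetical image output tokens

cost = (
    audio_hours * transcribe_rate            # $180
    + tts_chars / 1e6 * voice_rate           # $440
    + img_in_tokens / 1e6 * image_in_rate    # $10
    + img_out_tokens / 1e6 * image_out_rate  # $330
)
print(f"estimated monthly bill: ${cost:,.2f}")  # -> $960.00
```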
Hacker News engagement was low at launch (4 points, no comments at time of briefing), but that undersells the story: this is another step toward Microsoft building its own AI stack, independent of OpenAI. MAI-Transcribe-1’s benchmark lead over Whisper in major languages is the technical detail getting the most attention. For builders already in the Azure ecosystem, these models are worth evaluating seriously.
Today’s stories line up around a theme. Anthropic is securing gigawatts of proprietary compute. Google and Meta are racing to put frontier-capable models into the hands of any developer with a decent GPU. Microsoft is building its own model stack to reduce dependency on OpenAI. And OpenAI is fighting a legal battle that may redefine what “mission-driven AI” means under US law.
For builders — whether you’re just getting started with Claude Code or running production workloads — the practical read is this: access to near-frontier AI has never been cheaper or less restricted. Gemma 4 on consumer hardware is genuinely competitive with models that cost orders of magnitude more to run via API. The question now is which tasks still require the proprietary frontier, and whether the proprietary labs can maintain that gap as open weights keep closing it.
Anthropic has signed an agreement with Google and Broadcom to secure multiple gigawatts of next-generation TPU compute capacity, expected to come online in 2027. This is the company’s largest compute commitment to date, supporting its $30 billion annualised revenue run-rate and rapid enterprise customer growth.
Gemma 4 31B scores 89.2% on AIME 2026 math benchmarks, 84.3% on GPQA Diamond scientific reasoning, and tops most open-model leaderboards. The 26B MoE variant is particularly notable for its inference efficiency. As of release, it ranked 1st or 2nd among open-weight models on the Arena AI leaderboard. Performance does depend on proper configuration.
Elon Musk sued OpenAI in 2024, alleging he was deceived when the company transitioned toward a for-profit structure. OpenAI has countered with claims of anti-competitive behaviour. The latest development is a letter to California and Delaware attorneys general asking them to investigate Musk’s alleged coordination with Mark Zuckerberg to undermine OpenAI. Jury selection begins April 27, 2026.
Microsoft’s MAI models are first-party AI models developed independently of OpenAI. The three announced today are MAI-Transcribe-1 (speech recognition), MAI-Voice-1 (voice synthesis), and MAI-Image-2 (image generation). They are available through Microsoft Foundry with competitive per-unit pricing and are already being integrated into Microsoft’s own products.
For more on the stories shaping AI right now, visit FridayAIClub.com. We publish daily briefings every morning.