It’s Saturday, 28 March 2026, and it’s already been quite a week. Today’s daily digest covers what is probably the most Anthropic-heavy news cycle in years: a leaked frontier model, a federal court victory, and the quiet widening of the gap between what AI can do and what the rules say it should.

It started with a sloppy data store. Anthropic left close to 3,000 unpublished assets in a publicly accessible data store, including what appears to be a draft blog post announcing a new model called Claude Mythos. Fortune found it, a security researcher from LayerX and a Cambridge PhD student confirmed it, and Anthropic scrambled to lock it down once notified.
The leaked draft introduces a new tier called Capybara, sitting above Opus in capability and price. The post describes it as getting “dramatically higher scores” on benchmarks compared to Claude Opus 4.6, with particular jumps in coding, reasoning, and cybersecurity tasks. The same draft warned that the model poses “unprecedented cybersecurity risks.” That last phrase is the one that stuck.
Anthropic confirmed the model exists and is being tested with early access customers. A spokesperson called it “the most capable we’ve built to date” and described the exposure as a “human error” in their content management system configuration.
On X, the story pulled over 10,000 engagements in a day. Reactions split pretty cleanly: genuine awe at the capability jump on one side, alarm at both the security risk framing and the data leak itself on the other. Cybersecurity stocks took a hit: $PANW, $CRWD, and $FTNT each dropped between 4 and 6 percent. A vocal minority questioned whether the leak was truly accidental, given that details of a planned invite-only CEO summit in Europe were found in the same store. I’m not in the conspiracy camp, but the question isn’t absurd.
What’s clear: if the leaked draft is accurate, Mythos/Capybara represents a meaningful capability threshold rather than an incremental update. That’s significant in a market that’s been accused lately of running out of meaningful gains.

While the Mythos leak was dominating tech X, a separate Anthropic story was quietly setting a bigger legal precedent. U.S. District Judge Rita Lin issued a temporary order blocking the Pentagon from designating Anthropic as a supply chain risk, and blocking President Trump’s directive ordering all federal agencies to stop using Claude.
The backstory: negotiations between Anthropic and the Defense Department broke down after Anthropic tried to prevent its AI from being deployed in fully autonomous weapons or for domestic surveillance. The Pentagon, specifically Defense Secretary Pete Hegseth, responded by invoking a rare military authority typically reserved for foreign adversaries, branding Anthropic a potential “adversary and saboteur.”
Judge Lin wasn’t impressed. She wrote: “Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government.” She described the measures as appearing “designed to punish Anthropic” rather than protect operational security, and said they could “cripple” the company if allowed to stand.
The order is temporary, won’t take effect for a week, and doesn’t force the Pentagon to use Anthropic’s products. A separate case in the D.C. federal appeals court is still pending.
Reddit and Hacker News both ran hot on this one. The dominant read was that it sets a real precedent: AI companies can’t be punished through military procurement law for exercising First Amendment rights. One widely upvoted HN comment pointed out the uncomfortable timing: OpenAI signed its Pentagon deal in the hours after Anthropic was formally sanctioned. Whether that’s coincidence or coordination is an open question, but it’s exactly the kind of thing that makes people distrust the politics of AI procurement.

Bloomberg’s Mark Gurman, whose Apple hardware and software calls have been reliable for over a decade, reports that iOS 27 will allow users to set any third-party AI service as their default Siri backend. That ends the current ChatGPT exclusivity arrangement and opens Apple’s platform to Claude, Gemini, and whatever else clears the review process.
The report pulled 7,000+ likes and 422 quote tweets on X. The responses break into two camps: people who are genuinely excited about finally getting capable AI in Siri, and people who expect Apple to gatekeep this through App Store review in ways that effectively maintain the status quo. Hacker News focused less on the commercial angle and more on what third-party AI integration means for on-device privacy, an area where Apple has built a lot of brand equity, and one that gets complicated if your voice queries are routed through OpenAI’s or Anthropic’s servers.
If the report is accurate, this is a meaningful shift. Apple built its AI strategy around controlling the full stack. Opening the default AI layer to competitors is either a concession to competitive pressure or a strategic reframe, positioning iOS as an AI-neutral platform rather than trying to win the model race directly.
SoftBank has signed a $40 billion loan to fund its investment in OpenAI, according to Bloomberg. That would make it the largest AI financing deal on record, and it’s worth pausing on the number: forty billion dollars, borrowed, in a single bet on one company.
SoftBank already carries heavy leverage from its Vision Fund era, the period that produced Uber, WeWork, and a long list of overvalued bets that went sideways. The financial community on X was split: bulls see this as high-conviction positioning on what they believe is the defining platform of the decade; bears see it as SoftBank doing SoftBank things, with debt risk the market tends to underweight until it can’t.
Forty billion borrowed dollars tells you something about where SoftBank thinks OpenAI sits in the market. You don’t borrow at that scale on a company you think might lose. SoftBank is making a clear call that OpenAI remains the dominant player in enterprise AI for at least the next five years. Whether that call ages well depends heavily on whether Anthropic’s Mythos actually delivers on what the leak suggests.

This week, Google Research published TurboQuant, a vector quantization algorithm that achieves a 6x reduction in key-value cache memory usage with zero accuracy loss. It’s being presented at ICLR 2026 alongside two companion techniques: Quantized Johnson-Lindenstrauss (QJL) and PolarQuant.
The technical problem it solves: standard quantization shrinks the key-value cache but introduces overhead, because you have to store quantization constants (scales and zero points) that partially eat back the savings. TurboQuant eliminates that overhead through a two-step process. First, it randomly rotates data vectors to simplify their geometry (the PolarQuant step). Then it applies extreme compression without needing to store those constants separately. The result is a 6x compression ratio that holds across tasks.
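To make the zero-overhead idea concrete, here is a minimal numpy sketch of one way a rotate-then-quantize scheme can avoid storing per-coordinate constants: project with a random Gaussian matrix, keep only the sign of each projected coordinate for the stored key, and recover approximate inner products against a full-precision query through a single closed-form constant. The dimensions, the Gaussian-projection choice, and all names here are illustrative assumptions, not Google’s actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 64, 8192  # original dim; projection dim (oversized to keep demo error small)

# Random Gaussian projection: the "rotate to simplify the geometry" step.
S = rng.standard_normal((m, d))

def quantize_key(k):
    """Store 1 bit per projected coordinate (its sign) plus one norm.
    No per-coordinate scales or zero points are kept."""
    return np.sign(S @ k), np.linalg.norm(k)

def approx_inner(q, quantized):
    """Estimate <q, k> from the sign bits. For a Gaussian row s,
    E[(s @ q) * sign(s @ k)] = sqrt(2/pi) * <q, k> / ||k||,
    so the correction constant is global rather than stored per vector."""
    signs, k_norm = quantized
    return k_norm * np.sqrt(np.pi / 2) * np.mean((S @ q) * signs)

q = rng.standard_normal(d)
k = q + 0.1 * rng.standard_normal(d)  # a key correlated with the query

exact = float(q @ k)
est = approx_inner(q, quantize_key(k))
```

With the deliberately oversized projection above, the estimate typically lands within a few percent of the exact inner product; a real system would shrink m to save memory and tolerate more distortion in exchange.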
The reception on HN and r/MachineLearning was cautiously optimistic. The cost implications are significant, since GPU memory is one of the main bottlenecks and cost drivers in AI inference. A few people questioned whether “zero accuracy loss” truly holds across all task types and edge cases, which is a fair challenge for any efficiency claim. But the peer-reviewed framing (ICLR acceptance) gives it more credibility than a typical research blog post.
This is the kind of infrastructure-layer research that doesn’t make headlines but could materially change what AI deployment looks like in two years. If 6x memory reduction proves out at scale, it significantly lowers the hardware barrier to running large models.
Claude Mythos (also referred to as Capybara in Anthropic’s leaked draft) is a new tier of AI model above Opus, currently the company’s most capable public offering. According to the leaked materials, it delivers dramatically higher benchmark scores in coding, reasoning, and cybersecurity tasks, and is currently in testing with early access customers. Anthropic confirmed its existence but has not officially announced a release date.
U.S. District Judge Rita Lin ruled that the Trump administration’s decision to brand Anthropic as a supply chain risk, after the company refused to allow its AI in autonomous weapons, appeared arbitrary, punitive, and potentially in violation of First Amendment protections. The temporary order blocks both the supply chain designation and a directive banning all federal agency use of Claude.
According to Bloomberg’s Mark Gurman, iOS 27 will let users set any approved third-party AI service, including Claude or Gemini, as their default Siri assistant, ending ChatGPT’s current exclusive arrangement. In practice, this could mean much more capable AI responses through Siri. The main unknowns are which services Apple approves, and how on-device privacy is handled when queries route through external AI providers.
TurboQuant is a vector quantization algorithm published by Google Research that reduces the memory footprint of AI model key-value caches by up to 6x, with no measurable accuracy loss. It achieves this by rotating data vectors to simplify their geometry before applying compression, eliminating the quantization constant overhead that normally offsets the gains from standard compression techniques.
Stay on top of the AI stories that actually matter at FridayAIClub.com — published daily at 8AM London time.