Physical Address
304 North Cardinal St.
Dorchester Center, MA 02124
Physical Address
304 North Cardinal St.
Dorchester Center, MA 02124
Tuesday, 31 March 2026. Your daily digest of what’s happening at the frontier of artificial intelligence, and today Anthropic is somehow involved in nearly all of it.

Anthropic didn’t plan to tell you about its next flagship model this week. A misconfigured content management system made the decision for them. On Thursday, Fortune reported that close to 3,000 unpublished assets from Anthropic’s blog were sitting in a publicly searchable data cache, including a draft blog post announcing a model called Claude Mythos, also referred to internally as Capybara.
What the draft said is worth reading carefully. Anthropic described Mythos as “by far the most powerful AI model we’ve ever developed,” a new tier above Opus, priced higher and significantly more capable. The model “gets dramatically higher scores on tests of software coding, academic reasoning, and cybersecurity” than Claude Opus 4.6, according to the leaked document. Anthropic confirmed the model’s existence after Fortune contacted them, saying it represents “a step change” and “the most capable we’ve built to date.”
But here’s where it gets complicated. The same draft explicitly flags that Mythos poses unprecedented cybersecurity risks, that it is “currently far ahead of any other AI model in cyber capabilities” and “presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders.” Anthropic built something it believes hackers could use to run large-scale attacks, and rather than a broad public release, it’s rolling Mythos out first to security teams to help them prepare. That’s the company’s safety-first posture in action. It’s also a pretty awkward situation to have that information leak through a CMS misconfiguration.
On X, the leak was treated as the most significant model disclosure in years. Multiple security researchers independently verified the data, which gave the story more weight than a typical pre-launch rumour. The general mood: genuinely excited about the capability jump, genuinely concerned about what comes next. A few skeptics noted that Anthropic has used “step change” language before, but this time the leaked documentation from Cambridge and LayerX researchers added credibility that’s hard to dismiss.

Also this week: Anthropic shipped Claude Cowork, a new research-preview feature that gives Claude access to a folder on your Mac, letting it read, write, edit, and create files autonomously. The pitch is that you don’t need to be a developer to delegate real work to an AI agent. Give Claude your downloads folder, your expense screenshots, or a pile of research notes, and it handles the rest, reorganising and drafting, while keeping you updated in the background.
Right now it’s limited to Claude Max subscribers ($100–$200/month) via the macOS app, with a waitlist for everyone else. Microsoft simultaneously launched its own Copilot Cowork, built on the same Claude infrastructure, so the enterprise signal here is real. Anthropic itself acknowledged the risks: prompt injection attacks, the possibility of accidental file deletion, and the general messiness of giving an AI access to your local machine. “Agent safety is still an active area of development,” the company wrote. That’s an honest admission, and worth taking seriously.
Developer reaction on X ranged from “write, compile, launch, click, debug in one prompt: this is it” to more measured takes about whether $100/month is a reasonable threshold for what is, at launch, still a research preview. Reddit was split. Some saw it as a genuine step toward practical AI delegation for knowledge workers. Others flagged that the same prompt injection vulnerabilities Anthropic warned about remain unsolved. Both camps are right, which is roughly where we are with every agentic AI tool right now. If you want to explore this space, our guide on AI agent infrastructure is a good starting point.

This one has been building for weeks, and it landed in court on Monday. Anthropic has filed suit in a California district court against the US Department of Defense, challenging its designation as a supply-chain risk, a label typically applied to foreign companies suspected of posing cybersecurity threats or national security risks. Being an American company didn’t protect Anthropic. The designation followed the company’s refusal to remove its guidelines against using Claude for fully autonomous weapons and mass domestic surveillance.
The lawsuit argues that the government’s actions penalise protected speech under the First Amendment and violate Fifth Amendment rights. Beyond that, Anthropic says the executive order requiring all federal agencies to stop using its technology within six months exceeds the authority of the executive branch. The company is seeking to have the supply-chain designation overturned.
The fallout has already been significant. The General Services Administration terminated its OneGov contract, cutting Anthropic services from all three branches of the federal government. The Treasury and State Departments have also reportedly cut ties. Microsoft, one of Anthropic’s biggest clients, has continued working with the company but built separate processes to ensure its Anthropic-related work has no overlap with Pentagon contracts. A court decision is expected imminently. Legal observers, including Lawfare (which has been live-posting the proceedings), see this as a precedent-setting case for whether an AI company can be economically destroyed for expressing a view on AI safety.

Away from the Anthropic circus, a quieter but arguably more significant deal closed this week. Eli Lilly has reached a $2.75 billion agreement with Insilico Medicine, a Hong Kong-based AI drug discovery company, to bring Insilico’s AI-developed therapeutics to the global market. The structure is $115 million upfront, with the remainder tied to regulatory and commercial milestones, plus royalties on sales.
Insilico has developed at least 28 drugs using generative AI tools, nearly half of them already in clinical trials. The company went public in Hong Kong in December and its shares are up more than 50% year-to-date. This deal is an escalation of a software licensing collaboration the two companies began in 2023; Lilly is now backing Insilico’s pipeline at commercial scale. CEO Alex Zhavoronkov noted that Insilico’s AI work happens outside China, in Canada and the Middle East, while early preclinical development runs in China.
The biotech and AI communities were broadly positive on this one. The deal structure with milestone-linked payments is being read as credible rather than inflated; it’s not a headline number masking a small check. Discussion threads noted Lilly attended a high-level forum in Beijing earlier this month and has committed $3 billion to China investment over the next decade, so this deal fits a broader strategic picture. For people tracking whether AI-discovered drugs can reach commercial scale: this is the clearest answer yet that the answer is yes.

Apple Intelligence has been live in the US since October 2024. For roughly 18 months, China was the only major market where iPhone users had no access, stuck in a regulatory standoff requiring Apple to partner with local AI companies before launching. On Monday, that appeared to change. User reports across Chinese social media confirmed Apple Intelligence had gone live, seemingly triggered by iOS 26.4.
Then Bloomberg’s Mark Gurman reported it was a mistake. Apple pulled the rollout within hours. The features are apparently ready, and have been for months, but regulatory approval is still pending. Alibaba’s partnership for local AI compliance remains unresolved.
Chinese users who briefly celebrated were, understandably, frustrated. The brief rollout did confirm one thing: Apple’s systems are technically ready, and what’s blocking the China launch is entirely political. Some observers on X read Gurman’s “error” framing as a signal that regulatory approval might be imminent. An accidental rollout is a strange thing to trigger if the underlying approval is still months away. We’ll see.
Mythos, also called Capybara internally, is Anthropic’s next flagship AI model, currently in early-access testing with select enterprise customers. According to a leaked draft blog post, it sits above the Opus tier in capability and cost. Anthropic has confirmed its existence but hasn’t announced a public release date. Given the cybersecurity concerns the company flagged in the leaked document, the rollout is expected to be deliberate and phased.
Claude Cowork is a research-preview feature that lets Claude access a folder on your Mac to read, write, and organise files autonomously. It’s designed for non-developers, unlike Claude Code, which is aimed at engineers. Cowork is currently only available for Claude Max subscribers via the macOS app. Both tools share the same underlying agent capabilities; Cowork wraps them in a form accessible to knowledge workers who don’t write code.
Anthropic was designated a supply-chain risk by the Pentagon after refusing to remove its policies against using Claude for autonomous weapons and mass surveillance. The designation, which led to a presidential order banning all federal agencies from using Anthropic’s technology, is typically applied to foreign entities posing security threats, not US companies. Anthropic argues the action violates its First Amendment right to hold views on AI safety, and its Fifth Amendment protections. The case is currently before a California district court.
The $2.75 billion deal is the largest commercial agreement yet for AI-discovered drugs and a meaningful proof point that AI-developed therapeutics can reach global commercial scale. Insilico has 28 AI-generated drug candidates, nearly half in clinical trials. The deal validates the commercial case for AI in pharma across the full pipeline, from research to market.
That’s the wrap for 31 March 2026. Explore the full archive of daily AI coverage at FridayAIClub.com. We publish every morning so you don’t have to keep up with everything yourself.