Claude Mythos Leaked, Anthropic Sues the Pentagon, and Eli Lilly Signs a $2.75B AI Drug Deal

Wednesday, 1 April 2026, and no, none of this is an April Fools joke. Anthropic is at the centre of almost every story today: a leaked model, a new consumer agent, and a lawsuit against the Pentagon. There’s also the largest AI drug deal ever signed, and Apple’s accidental China moment.

Anthropic’s secret ‘Claude Mythos’ model leaked, and it sounds like a step change

Anthropic didn’t plan to announce its most powerful model yet. A misconfigured content management system left nearly 3,000 unpublished assets publicly searchable, including a draft blog post describing a new model called Claude Mythos, also referred to internally as “Capybara.”

The leaked document, reviewed by Fortune and independently verified by security researchers at LayerX Security and the University of Cambridge, describes Mythos as “by far the most powerful AI model we’ve ever developed.” Anthropic confirmed the model exists and is currently in early-access testing. The company called it “a step change,” its most capable build to date.

The capability numbers are one thing. The leaked draft also notes that Mythos “gets dramatically higher scores on tests of software coding, academic reasoning, and cybersecurity” compared to Claude Opus 4.6. But in the same breath, Anthropic apparently flagged the model as posing unprecedented cybersecurity risks. That last part is what researchers on Hacker News and X latched onto immediately: not the capability jump, but the self-reported danger flag from a safety-first company.

The community reaction split roughly evenly between excitement and alarm. On X, the leak was treated as the most significant accidental model disclosure in years, and multiple security researchers independently verified the data. But skeptics noted the obvious irony: a frontier AI safety lab accidentally leaking details of its most powerful and potentially dangerous model through a misconfigured public database. One widely circulated take put it plainly: “If your own AI coding tool accidentally exposes your source code, that’s a pointed story.” The embarrassment aside, the leak confirmed what the community had suspected: there’s a tier above Opus in the works, and it’s called Capybara.

Claude Cowork brings agentic AI to non-coders, but with some real caveats

Separate from the leak drama, Anthropic shipped something new on Monday: Claude Cowork, a feature that gives Claude access to a folder on your computer and lets it handle multi-step tasks autonomously. Think reorganising downloads, pulling expenses from a pile of screenshots, or drafting a report from scattered notes.

Anthropic didn’t build this for developers. The company explicitly framed it as “a more approachable form” of what Claude Code already does for coders, with the goal of making agentic AI accessible to everyone else. Users can queue tasks, let Claude work in parallel, and check in without micromanaging. It connects to external tools too: Asana, Notion, PayPal, and more via existing connectors.

The catch? It’s currently only available to Claude Max subscribers ($100–$200/month) on macOS. That pricing drew some eye-rolls on X, though developers who tested it were genuinely impressed. “Write, compile, launch, click, debug” in a single prompt was one reaction. Reddit was more measured: enthusiasm for the capability, but concern about the prompt injection risks Anthropic itself acknowledged. The company’s own announcement included a warning that Claude can take “potentially destructive actions” if instructions aren’t clear, and that agent safety “is still an active area of development.” Worth keeping that in mind before handing a folder over.

Microsoft rolled out Copilot Cowork (powered by Claude) through its M365 Frontier program on the same day, which suggests this goes beyond a consumer experiment. Enterprise workflows are the real target.

Anthropic sues the Department of Defense over its Pentagon blacklisting

Anthropic filed a lawsuit in a California district court against the US government this week, challenging its designation as a military supply-chain risk. The Trump administration placed Anthropic on the list after the company set limits on how its AI could be used, specifically refusing to enable mass domestic surveillance and fully autonomous weapons systems.

The suit argues the government violated the First Amendment by retaliating against Anthropic for its publicly stated AI safety positions. “The federal government retaliated against a leading frontier AI developer for adhering to its protected viewpoint on a subject of great public significance — AI safety,” the complaint reads. Anthropic also argues the executive order demanding all agencies drop its tools within six months falls outside the authority of the executive branch.

The fallout has been real: the General Services Administration terminated its OneGov contract, cutting Anthropic’s access across all three branches of the federal government. The Treasury and State Department have reportedly followed suit. Meanwhile, the company’s biggest commercial client, Microsoft, has said it’s keeping Anthropic but ring-fencing any work connected to Pentagon contracts.

Legal and tech communities are watching closely. Many see it as a precedent-setting case: the first time a frontier AI lab has gone to court over where safety “red lines” end and government procurement rights begin. Lawfare was live-posting the hearing. The court decision is expected soon. Underneath the legal arguments, there’s something worth sitting with: an American company can apparently be designated a national security risk for disagreeing with its government’s AI policy.

Eli Lilly signs a $2.75B deal for AI-developed drugs, the biggest yet

Away from Anthropic’s eventful week, the pharma world produced its own headline: Eli Lilly has signed a $2.75 billion deal with Hong Kong-based Insilico Medicine to bring AI-discovered drugs to the global market. It’s the largest deal of its kind.

Insilico has developed at least 28 drugs using generative AI tools, with nearly half already in clinical trials. The deal gives Insilico $115 million upfront, with the remaining $2.635 billion tied to regulatory and commercial milestones. Eli Lilly CEO David Ricks attended a high-level forum in Beijing earlier this month, in line with the company’s broader $3 billion China investment plan over the next decade. Insilico does its AI work in Canada and the Middle East, with early preclinical development conducted in China.

The biotech community was broadly positive: Insilico’s shares are up more than 50% year-to-date. There’s some healthy skepticism about the milestone-heavy deal structure (the gap between $115M upfront and $2.75B total is significant), but most analysts see this as meaningful validation that AI-discovered therapeutics can reach commercial scale. Worth noting too: Lilly and Insilico have been working together since 2023 under a software licensing agreement. This deal is an escalation, not a cold approach.

Apple Intelligence launched in China by accident, then got pulled

On Monday, users in China started reporting that Apple Intelligence had gone live on their iPhones. For a few hours, it looked like a significant moment: China was the last major market without access to Apple’s AI features, and an 18-month wait seemed to finally be over.

Then Mark Gurman reported it was a mistake. Apple pulled the rollout, and the company remains stuck in regulatory limbo, awaiting government approval despite the features reportedly being technically ready for months. The Alibaba partnership required for local AI compliance is still pending sign-off from Beijing.

Chinese users were briefly excited before the features disappeared, and frustration was the dominant mood afterward. Some observers read the accidental rollout as confirmation that Apple’s systems are ready and the delay is entirely political. Others on X offered a darker read: if Apple can’t navigate this regulatory bottleneck, it’s a meaningful strategic vulnerability for a company that has staked significant revenue on its China business. The accidental launch showed the gap between “technically ready” and “politically cleared” is wider than Apple would like.

Frequently asked questions

What is Claude Mythos and when is it being released?

Claude Mythos (also referred to as “Capybara”) is Anthropic’s most powerful AI model to date, sitting above the existing Opus tier. It was revealed through an accidental data leak in March 2026. Anthropic confirmed it is currently in early-access testing with select customers. No public release date has been announced. Anthropic has indicated it is being “deliberate” about the release process given the model’s capabilities and the cybersecurity risks it poses.

What is the Anthropic vs Department of Defense lawsuit about?

Anthropic sued the US government after the Trump administration designated the company a military supply-chain risk and ordered all government agencies to stop using its AI within six months. The designation followed Anthropic’s refusal to enable mass surveillance and autonomous weapons. The lawsuit argues the government violated First and Fifth Amendment rights by retaliating against Anthropic for its AI safety stance.

What is Claude Cowork and who can use it?

Claude Cowork is Anthropic’s new agentic AI feature that gives Claude access to a local folder on your computer to handle multi-step tasks autonomously: organising files, extracting data from screenshots, drafting reports, and more. It is currently available only to Claude Max subscribers ($100–$200/month) on macOS. A waitlist is open for other users. Microsoft’s Copilot Cowork, powered by Claude, is available separately via the M365 Frontier program.

Why is the Eli Lilly and Insilico Medicine deal significant for AI?

The $2.75 billion deal is the largest commercial agreement for AI-discovered drugs in history. It validates the commercial viability of generative AI in drug discovery: Insilico has 28 AI-developed drug candidates, with nearly half in clinical trials. The deal signals that large pharmaceutical companies are willing to put serious money behind AI-originated therapeutics as primary candidates, not just research tools.

The bigger picture

Anthropic dominated this week’s news cycle through sheer velocity: a model leak, a consumer agent launch, a Pentagon lawsuit, and Claude embedded in Microsoft enterprise tools, all within days. The underlying story is IPO positioning: both OpenAI and Anthropic are racing to demonstrate capability and safety discipline before going public. That tension between shipping fast and fighting legal battles over AI red lines is going to define the next year or two, whether the companies want it to or not.

Explore More from Friday AI Club

For more daily AI coverage, head to FridayAIClub.com.