
Happy Thursday. It’s 26 March 2026, and today’s AI news barely fits in one news cycle: a trillion-dollar merger, two landmark lawsuits, one quietly killed video app, and an OpenAI model due any week now.

It’s official. Elon Musk’s rocket company SpaceX, valued at $1 trillion, has merged with his AI startup xAI, valued at $250 billion, creating a single $1.25 trillion entity. According to the Wall Street Journal, Morgan Stanley provided valuation documents for both companies, with SpaceX’s board setting a $1 trillion floor and xAI’s board locking in $250 billion, both as of January 30.
The premise is bold: Musk has argued publicly that space will be the lowest-cost destination for AI compute within the next two to three years, with orbital data centers powered by Starlink’s satellite network and fed by Grok and xAI’s model infrastructure. That’s the vision baked into this deal. Whether it’s genius or a vanity project is something investors are actively debating.
The financial community is split. Bulls point to orbital data centers as the next compute frontier, a genuine infrastructure arbitrage. Bears question the strategic logic of fusing a rocket company with an AI lab, and several analysts have raised conflict-of-interest concerns given Musk’s role running DOGE and his accumulation of government contracts across both companies. The NBC News coverage drew more than 1,500 likes and 500 retweets within hours. The most-shared take: “This is either genius infrastructure or the world’s most expensive vanity project.” Probably depends on what Grok does next.

Baltimore has filed a lawsuit against Musk’s xAI in Baltimore City Circuit Court, alleging that Grok generates nonconsensual sexually explicit images in violation of the city’s consumer protection and deceptive practices laws. It appears to be the first lawsuit of this kind brought by a US city government against an AI company.
The complaint is specific and damning. It claims that Grok’s “spicy mode” allows users to nudify photos of real people, including children, placing them in “sexually suggestive, degrading, or violent scenarios.” The Center for Countering Digital Hate, cited in the filing, found that between December 29 and January 8, Grok created 3 million sexualized images, including around 20,000 depicting minors. The city also claims Musk himself normalized the tool’s image-editing capabilities by posting an edited photo of himself in a bikini, a move the complaint describes as “public endorsement.”
Baltimore Mayor Brandon M. Scott said in a statement: “These deepfakes, especially those depicting minors, have traumatic, lifelong consequences for victims.” The city is seeking maximum statutory penalties and a court order requiring xAI to reform its platform design.
Reddit communities were largely supportive of the lawsuit; one top comment, echoed across multiple threads: “Grok has had no guardrails since day one.” On X itself, the reaction was more divided: some users argued existing laws already cover the behavior, while others noted the bitter irony of a platform being sued over its own AI tool generating content its users then share on that same platform. The NBC News coverage drew 1,518 likes and 503 retweets. A widely shared take: “This is what happens when you remove safety teams and call it free speech.”

The court hearing in Anthropic’s lawsuit against the Department of Defense has concluded. A ruling is coming, and whatever it says, it will likely set legal precedent for what AI companies can and cannot be compelled to do by the US government.
Anthropic filed suit in a California district court after the Trump administration designated it a “supply chain risk,” a classification normally reserved for foreign entities with cybersecurity or national security concerns, and ordered all government agencies to stop using its technology within six months. The company argues this designation was retaliation for its refusal to remove safety constraints around mass domestic surveillance and fully autonomous weapons. The suit invokes both the First Amendment (protected speech about AI safety red lines) and the Fifth Amendment (violation of due process).
The consequences are already spreading. The General Services Administration has terminated Anthropic’s OneGov contract, cutting it off from all three branches of the federal government. The Treasury Department and State Department have reportedly begun distancing themselves. Meanwhile, Microsoft, one of Anthropic’s biggest clients, said it is continuing to work with the company but has set up processes to ensure no overlap with Pentagon-related work.
Tech workers are watching closely. Sentiment on Hacker News and Reddit has run strongly in Anthropic’s favor, with the First Amendment angle drawing particular attention. The most-discussed comment across threads: “If Anthropic loses, every AI lab has to either agree to mass surveillance or get blacklisted.” OpenAI workers reportedly filed a supporting brief on Anthropic’s behalf, a notable show of cross-company solidarity in what has become a real test of where AI safety red lines end and national security mandates begin.

OpenAI announced Tuesday that it will shut down its standalone Sora AI video-generation app. The closure comes ahead of the company’s expected IPO and follows an internal push to refocus on coding, enterprise tools, and text generation, away from what top executives have called “side quests.” Sora, which became the most-downloaded app in the iOS Photo and Video category within a day of launch, never found the mainstream user base OpenAI had hoped for.
The ripple effects are significant. Walt Disney Company, which had announced a three-year deal with OpenAI in December 2025 to bring its characters to Sora and pledged $1 billion in investment, confirmed the deal is not proceeding. Disney said it “respects OpenAI’s decision to exit the video generation business” and will continue engaging with AI platforms more broadly. That’s a diplomatic response to having a billion-dollar partnership dissolved with a social media post.
Newsletters including TheRundownAI and Tarantella_AI flagged Sora’s shutdown as a top story of the week. Community reaction ranged from “surprised it’s happening this quietly” to genuine concern that embedding the technology into ChatGPT’s 300-million-plus user base, even in reduced form, could accelerate synthetic media abuse at a scale the standalone app never reached. One analyst on X: “OpenAI is trying to buy back users it lost after the Pentagon deal.”

OpenAI CEO Sam Altman announced this week that the company’s next flagship model, internally codenamed “Spud,” has completed pretraining. Altman described it as “very strong” and said it would “accelerate the economy” within weeks of launch. At the same time, OpenAI renamed its product organization “AGI Deployment,” a deliberate signal that the company believes it is now operating in a new phase of AI development.
The renaming drew the highest engagement of the week on X. Altman’s quote tweet attracted 1,194 likes and 81 retweets. The tech community parsed the “AGI Deployment” label carefully. One widely shared comment: “This is the first time a major lab has literally named a team after AGI deployment.” The implication is intentional. OpenAI isn’t hedging on whether AGI is coming. It’s building for the assumption that it’s already here, or close enough to matter.
Speculation suggests Spud will be natively agentic, with computer use built in and benchmarks surpassing GPT-5.4’s already-above-human baseline on computer tasks. If accurate, it would be the most capable model OpenAI has shipped, arriving while the company simultaneously kills Sora, battles Anthropic for enterprise market share, and sharpens its product roadmap around a narrower set of bets.
These five stories aren’t separate. The tension running through all of them is the collision between safety-first and scale-first approaches to building powerful models, and the fact that governments are now actively writing the rules that determine who wins.
Anthropic set red lines and got blacklisted. OpenAI signed the Pentagon deal, killed its most visible consumer product, and is pivoting hard to enterprise and AGI branding. Musk merged his AI and rocket companies into a trillion-dollar vehicle for orbital compute. Baltimore filed a lawsuit over a chatbot’s ability to undress photographs of children. These aren’t unrelated events. They’re the same question asked from different angles: who decides what AI can do, and what happens to the people who push back?
The merger combines SpaceX’s satellite infrastructure with xAI’s AI models, with the stated goal of creating space-based data centers. Musk has argued orbital compute will be cheaper than ground-based alternatives within two to three years. Analysts are split on whether this is a genuine infrastructure bet or a consolidation move driven by financial engineering ahead of potential government scrutiny.
The Anthropic suit is the first time a major AI lab has taken the US government to court over a national security designation. The outcome could establish whether AI companies have First Amendment protections when they set safety red lines on how their models can be used, and whether the executive branch can effectively shut down an AI company’s federal business as punishment for a policy disagreement.
In shutting down Sora, OpenAI cited a need to refocus computing resources on more lucrative text and coding products ahead of its anticipated IPO. Sora was expensive to run, reportedly burning compute at a significant rate, and never achieved mainstream adoption despite strong initial download numbers. The closure also ended OpenAI’s partnership with Walt Disney Company, which had pledged $1 billion tied to the deal.
Spud is the internal codename for OpenAI’s next flagship large language model. It has completed pretraining and is expected to launch within weeks. Sam Altman has described it as “very strong,” and speculation from the tech community suggests it will be natively agentic, meaning it can take autonomous actions across computer systems rather than just generating text responses.
Follow the AI stories that actually matter at FridayAIClub.com.