Hello,
This weekend marked a dramatic split in the AI industry.
Anthropic refused to loosen its safety restrictions for the Pentagon—and was labeled a "supply chain risk." OpenAI, meanwhile, secured a deal to deploy its models on classified Defense Department networks.
The Information broke this story as it unfolded, revealing how the standoff developed and what it signals for AI safety and national security. Subscribe now for $299 and save 25% on the first year so you don't miss the next big story.
Anthropic's Standoff
Anthropic CEO Dario Amodei rejected Pentagon demands to allow AI for "any lawful use," insisting on bans against mass domestic surveillance and autonomous weapons without human oversight.
President Donald Trump ordered federal agencies to phase out the company's AI. Defense Secretary Pete Hegseth declared Anthropic a "supply chain risk," barring military contractors from doing business with the AI startup, an extraordinary step typically reserved for foreign adversaries, and throwing into doubt Anthropic's work with partners like Amazon and Google.
OpenAI's Deal and Altman's Position
OpenAI announced an agreement to deploy its models on classified Pentagon networks, allowing use for "all lawful purposes." Lawyers argue "lawful" leaves room for interpretation. CEO Sam Altman defended the move, saying private companies shouldn't decide national defense outcomes: "I really don't want us to decide what to do if a nuke is coming towards the U.S."
Why It Matters
The dispute marks a break between two of the leading frontier labs on AI safety. For Anthropic, the designation could jeopardize ties with major cloud partners. Inside OpenAI, nearly 100 employees signed a pledge urging leadership to stand with Anthropic.
Stay ahead of the decisions shaping AI's future. Subscribe for $299 and save 25% on the first year.
For deeper access to exclusive features like Org Charts for OpenAI and Anthropic, as well as our Deep Research tool and more—subscribe to Pro.