AI Needs Accountability. We Can’t Rely on Companies and Governments Alone
(Julie Owono – Just Security)

The flare-up earlier this year between Anthropic and the U.S. Department of Defense has exposed a deeper crisis: our systems for holding AI development and deployment to account are broken. Companies enforce their own internal rules when convenient; governments regulate when politically expedient. What is missing is an independent, public-interest layer of accountability: a third way between corporate self-regulation and state enforcement. Replace “AI” with “social media,” “quantum computing,” or “biotech,” and the statement still stands.

On one hand, tech companies often profess the best intentions, yet do not practice what they preach. Many cheered Anthropic’s CEO, Dario Amodei, for resisting the U.S. government’s use of his company’s models for mass surveillance or fully autonomous weapons systems. But on Feb. 25, the company quietly gutted the central safeguard of its Responsible Scaling Policy: its commitment not to deploy or train models until adequate safety mechanisms were in place. That pledge, clear in the policy’s 2023–25 versions, all but disappeared from the 2026 revision. As the company’s chief scientist explained, it no longer “[made] sense for us to make unilateral commitments… if competitors are blazing ahead.” And while Anthropic was still being celebrated publicly, we learned its model had already been used in U.S. military operations in Venezuela and Iran.