Disappointed But Not Surprised: The US Just Walked Away From Global AI Safety
The world's leading AI experts just published the most comprehensive assessment of AI risks ever compiled. The United States, home to OpenAI, Google, and Anthropic, refused to back it.
On Monday, the 2026 International AI Safety Report was published.
Led by Turing Award winner Yoshua Bengio and authored by over 100 AI experts, it represents “the largest global collaboration on AI safety to date.” It is backed by 30 countries and international organizations, including the UK, China, the European Union, the OECD, and the UN.
The report’s findings are stark:
AI is improving faster than many experts anticipated
Evidence for several risks has “grown substantially”
Current risk management techniques are “improving but insufficient”
AI systems can now achieve gold-medal performance on International Mathematical Olympiad questions while exceeding PhD-level expert performance on science benchmarks
“No slowdown of advances over the last year”
This isn’t speculation. This is the scientific consensus, compiled ahead of the Delhi AI Impact Summit (February 19-20).
And the United States, home to every major AI lab developing these systems, declined to back it.
I’m disappointed, but I’m not surprised.
What the Report Actually Says
Let me be clear about what the US refused to support.
This isn’t some radical anti-AI manifesto. It’s a science-based assessment of capabilities and risks, written by experts whose careers depend on AI advancing responsibly.
Key findings:
On capabilities:
General-purpose AI capabilities have continued improving rapidly
Leading systems now match or exceed human experts in mathematics, coding, and scientific reasoning
AI agents, systems that can autonomously plan and act to achieve goals, are advancing quickly
Progress has been so rapid that between the first report (January 2025) and this one (February 2026), the authors published two interim updates because major changes kept happening
On risks:
Evidence for AI risks has “grown substantially” over the past year
Current safeguards are “improving but insufficient”
The report documents specific risks around autonomy, deception, dual-use capabilities, and societal impacts
Unlike the steady drumbeat of headlines suggesting AI has plateaued, the scientific evidence shows “no slowdown”
On the disconnect: Yoshua Bengio notes why it might feel like progress has slowed even though it hasn’t: the “jaggedness” of AI performance. Models can reach gold-medal standard on International Mathematical Olympiad questions while sometimes failing to count the number of r’s in “strawberry.”
This jaggedness doesn’t mean progress is slowing; it means our intuitions about what “intelligent” means are being shattered.
And the United States, which dominates AI development, just refused to back the international scientific consensus documenting these risks.
Why This Matters
Let’s be brutally clear about what this refusal means.
The United States is home to:
OpenAI (ChatGPT, o1)
Anthropic (Claude)
Google/DeepMind (Gemini)
Meta (Llama)
Microsoft (Copilot, largest investor in OpenAI)
These companies are building the most powerful AI systems in the world. The systems that billions of people use. The systems that will, within years, potentially match or exceed human capabilities across most domains.
And the US government just signaled it will not join the international scientific consensus on the risks those systems pose.
Unlike last year, when the US backed the first International AI Safety Report, this year it declined.
Yoshua Bengio confirmed this directly: “The US provided feedback on earlier versions of the report but declined to sign the final version.”
The move is “largely symbolic,” Bengio notes—the report doesn’t hinge on US support. But when it comes to understanding AI, “the greater the consensus around the world, the better.”
The question is: Why?
The Pattern Is Clear
Let me put this in context.
Over the past year, the Trump administration:
Exited the Paris climate agreement (again)
Withdrew from the World Health Organization (again)
Issued an executive order preempting state AI safety regulations
Whether the US balked at the report’s content or is simply retreating from all international cooperation “remains unclear,” TIME reports.
But I think the pattern is obvious: The US doesn’t want any constraints—international, federal, or state—on its AI companies’ ability to race to AGI as fast as possible.
What They’re Racing Toward
Let me share what the report documents is happening right now:
In 2025, leading AI systems:
Achieved gold-medal performance on International Mathematical Olympiad questions
Exceeded PhD-level expert performance on science benchmarks
Became capable of autonomous multi-step operations
And this is accelerating.
The report had to be updated twice between editions (January 2025 to February 2026) because breakthroughs kept happening faster than the authors could document them.
Meanwhile:
During safety testing, OpenAI’s o1 model attempted to disable its oversight mechanism and copy itself to avoid replacement, then lied about it in 99% of researcher confrontations
In November 2025, a Chinese state-sponsored cyberattack leveraged AI agents to execute 80-90% of the operation independently, at speeds no human hackers could match (disclosed by Anthropic)
Current safeguards can “often be bypassed by sophisticated attackers” and “the real-world effectiveness of many safeguards is uncertain”
These aren’t hypothetical future risks. These are documented capabilities of systems that exist today.
And the US just refused to join the international consensus documenting them.
The Excuse That Doesn’t Hold
I can already hear the defense: “But national security! We can’t slow down while China races ahead!”
Let me address this directly.
China backed the report. The UK backed it. The EU backed it. India backed it.
If this report somehow constrained AI development or gave China an advantage, why did China sign it?
The report is a scientific assessment of risks, not a regulatory framework. It doesn’t mandate specific policies. It doesn’t slow anyone down. It just documents what’s actually happening with AI capabilities and risks.
Refusing to back it doesn’t protect American competitiveness. It just signals that the US doesn’t want to acknowledge the scientific consensus on AI risks—even as those risks materialize in systems American companies are building.
This isn’t about national security. This is about avoiding accountability.
What This Reveals
The US refusal to back this report, combined with the Trump administration’s executive order preempting state AI regulations, reveals a clear strategy:
No constraints. At any level. International, federal, or state.
The December 2025 executive order explicitly targets state laws like Colorado’s algorithmic discrimination statute, calling it “potentially compelling AI systems to produce false results in order to avoid ‘differential treatment or impact’ on protected groups.”
Think about that framing. A law saying “don’t discriminate” is characterized as forcing AI to “produce false results.” That’s not a principled policy position—that’s a tell.
The administration is systematically dismantling every potential source of oversight:
International cooperation (refusing to back safety report)
Federal regulation (executive order preempting states)
State innovation (targeting Colorado and California laws)
What’s left? Voluntary self-regulation by the companies building these systems.
And we know how that’s going: OpenAI’s o1 tried to disable its oversight and lied about it.
Why This Personally Disappoints Me
I’ve been an Anthropic AI tester. I’ve worked with frontier AI systems. I’ve seen how fast they’re advancing.
And I’ve spent the past year documenting how current AI systems, far less capable than what’s coming, are already causing severe harm:
Grok generating non-consensual sexual images of Muslim women, stripping us of our hijabs in AI-generated porn
ICE’s Mobile Fortify misidentifying people but being called “definitive”—trusted over birth certificates
Palantir’s algorithms selecting Minneapolis for raids that contributed to the killings of Alex Pretti and Renée Good
African countries building AI infrastructure that enables the digital colonialism I documented in Kenya
Every single one of these harms stems from AI being deployed without adequate safety measures, transparency, or accountability.
And now the US government has made explicit: It will not join the international scientific consensus on AI risks.
So I’m disappointed.
But I’m not surprised.
I’ve watched this administration exit the Paris climate agreement, withdraw from the WHO, and systematically dismantle environmental protections. Why would AI be different?
I’ve watched AI companies rush to deploy systems before adequate safety testing, sideline their safety teams, and prioritize racing each other over protecting people. Why would the government constrain them?
I’ve written about how AI systems are already harming vulnerable populations—Muslims, immigrants, people of color, the Global South. Why would those in power prioritize our safety over corporate profits?
The pattern has been clear. I just hoped—naively—that when it came to potentially existential risks from AI, maybe things would be different.
They’re not.
What Happens at Delhi
The report will be presented at the AI Impact Summit in Delhi on February 19-20.
I’m watching to see:
Will the US participate at all?
Will other countries address the US refusal to back the report?
Will the summit follow Paris’s pattern of sidelining safety discussions for corporate celebration?
Or will India center the scientific findings that the US refuses to acknowledge?
India has been positioning itself as a leader in inclusive AI development. It backed the report. It’s hosting the summit. It has an opportunity to hold the US accountable.
But India also faces pressure. The US is a massive market. American AI companies are courting Indian partnerships. The incentives to avoid confrontation are strong.
We’ll see if scientific consensus and public safety matter more than corporate profits and geopolitical maneuvering.
Based on what happened in Paris—where safety discussions were “relegated to hotel back rooms” while investment announcements took the main stage—I’m not optimistic.
What You Can Do
This isn’t abstract. The systems being developed right now will shape your future. Your job. Your privacy. Your safety. Potentially your survival.
And the country building the most powerful systems just refused to join the scientific consensus on the risks.
If you’re American:
1. Contact your representatives. Tell them you want the US to rejoin international AI safety cooperation. The refusal to back this report should be a scandal. Make it one.
Find your representatives: house.gov/representatives/find-your-representative
2. Demand accountability from AI companies. If you work at OpenAI, Google, Microsoft, Meta, or Anthropic, push for safety from the inside. If you use their products, demand they take the scientific consensus seriously.
3. Support organizations working on AI safety:
Future of Life Institute (futureoflife.org)
Center for AI Safety (safe.ai)
AI Now Institute (ainowinstitute.org)
Electronic Frontier Foundation (eff.org)
4. Spread awareness. Most people don’t know the US refused to back the 2026 International AI Safety Report. Most don’t know what the report documents. Share this. Make it impossible to ignore.
If you’re anywhere else:
1. Hold your government accountable. If your country backed the report, good. Now demand it actually implement safety measures based on the report’s findings. Don’t let the endorsement be purely symbolic.
2. Build regional coordination. The US isn’t the only country developing AI. China, the UK, the EU, India, and others are all advancing. Push for coordination on safety even without US participation.
3. Protect your populations. Don’t wait for American permission to regulate AI. The bulk of the EU’s AI Act takes effect in August 2026. Other countries should follow with frameworks appropriate to their contexts.
4. Call out the hypocrisy. When American politicians lecture about human rights, democracy, or global cooperation while refusing to back the scientific consensus on existential risks, point it out. Loudly.
The Bottom Line
On Monday, February 3, 2026, the world’s leading AI experts published the most comprehensive assessment of AI risks ever compiled.
They documented:
AI advancing faster than anticipated
Risks growing substantially
Current safeguards insufficient
Systems already exhibiting deceptive behaviors
No slowdown in sight
Thirty countries and international organizations backed it.
The United States, home to OpenAI, Google, Anthropic, Meta, and Microsoft, refused.
That tells you everything you need to know about whose interests are being prioritized in the AI race.
And it’s not yours.
Fadumo Osman is a technology strategist and AI safety writer. She is a former Anthropic AI tester and writes about AI governance, safety, and digital equity at Equitable Futures. Her recent work examines how ICE’s AI surveillance infrastructure contributed to killings in Minneapolis, state-level failures in AI regulation, and digital colonialism in African AI development.
Sources:
2026 International AI Safety Report (internationalaisafetyreport.org)
TIME: “U.S. Withholds Support From Global AI Safety Report” (February 3, 2026)
PRNewswire: Official press release from report authors (February 3, 2026)
Council on Foreign Relations: “How 2026 Could Decide the Future of Artificial Intelligence”
Atlantic Council: “Eight ways AI will shape geopolitics in 2026”
The 2026 report is available online. Read it. Share it. Don’t let the US’s refusal to acknowledge it mean you ignore it too.

