Risk thresholds for frontier AI: Insights from the AI Action Summit (Eunseo Dana Choi, Dylan Rogers – OECD.AI)

How many hot days make a heatwave? When do rising water levels become a flood? How many people constitute a crowd? We live in a world defined by thresholds. Thresholds impose order on the messy continuum of reality and help us make decisions: a threshold is a pre-defined point above which additional mitigations are deemed necessary.

There is growing interest in thresholds as a tool for governing advanced AI systems, known as frontier AI. AI developers such as Google DeepMind, Meta, and Anthropic have published safety frameworks that include thresholds at which risks from their systems would be deemed unacceptable. The OECD recently conducted an expert survey and public consultation on the topic. To deepen this conversation, the UK AI Security Institute (AISI) and the OECD AI Unit convened leading experts at the AI Action Summit to discuss the role of thresholds in AI governance. Representatives from the nuclear and aviation industries joined experts from the Frontier Model Forum, Google DeepMind, Meta, Humane Intelligence, SaferAI, and the EU AI Office. This blog captures some key insights from the discussions.
