As artificial intelligence (AI) systems become more capable, they stand to dramatically improve our lives—facilitating scientific discoveries, medical breakthroughs, and gains in economic productivity. But capability is a double-edged sword. Despite their promise, advanced AI systems also threaten to do great harm, whether by accident or because of malicious human use. Many of those closest to the technology warn that the risk of an AI-caused catastrophe is nontrivial. In a 2023 survey of over 2,500 AI experts, the median respondent placed the probability that AI causes an extinction-level event at 5 percent, with 10 percent of respondents placing the risk at 25 percent or higher. Dario Amodei, co-founder and CEO of Anthropic—one of the world’s foremost AI companies—believes the risk to be somewhere between 10 percent and 25 percent. Nobel laureate and Turing Award winner Geoffrey Hinton, the “Godfather of AI,” after once venturing a similar estimate, now places the probability at more than 50 percent. Amodei and Hinton are among the many leading scientists and industry players who have publicly urged that “mitigating the risk of extinction from AI should be a global priority” on par with the prevention of “pandemics and nuclear war.”
Shared Residual Liability for Frontier AI Firms (Ben Gil Friedman – Lawfare)