Are We Ready for a ‘DeepSeek for Bioweapons’? (Steven Adler – Lawfare)

The announcement of a powerful new artificial intelligence (AI) model is a leading indicator that many similar AI models are close behind. The January 2025 release from the Chinese company DeepSeek is an example of the small gap between when an AI ability is first demonstrated and when others can match it: Only four months earlier, OpenAI had previewed its then-leading o1 “reasoning model,” which used a new approach for getting the model to think harder. Within months, the much smaller DeepSeek had roughly matched OpenAI’s results, indicating that Chinese AI companies may not be far behind those in the U.S. In that case, matching o1’s abilities posed little specific risk, even though DeepSeek took a different approach to safety than did the leading Western companies (for instance, DeepSeek’s model is freely downloadable by anyone, and so has fewer protections against misuse). The replicated abilities were general reasoning skills, not something outright dangerous. In contrast, the abilities feared by the leading AI companies tend to be more specific, like helping people cause harm with bioweapons.
