Even before the release of DeepSeek-R1, a contentious policy debate was taking shape: should open-source AI be regulated? In the United States, the debate has been dominated by two competing perspectives. One emphasizes geopolitical risk and global power dynamics, focusing on Chinese misuse of U.S. open-source AI. The other is rooted in ideological values championed by the open-source community: innovation, transparency, and democracy. U.S. policymakers face the formidable task of reconciling these seemingly competing priorities. If they wish to balance geopolitical and ideological considerations, export controls on open-source models are not the solution. Attempts to partially restrict access to publicly available information would likely prove porous and ineffective, while disrupting innovation and undermining American influence. A more effective alternative would be to assess the risks endemic to each model and determine an appropriate mode of release accordingly, rather than trying to prevent specific actors from accessing public information.
Export Controls on Open-Source Models Will Not Win the AI Race



