The Global Eye in dialogue with Emmie Hine, a PhD candidate in Law, Science, and Technology at the University of Bologna, where she researches the ethics and governance of emerging technologies, with a particular focus on China. Emmie holds degrees from Williams College and the University of Oxford, and previously worked as a software engineer.
Can you explain to our readers the evolution of Chinese legislation on AI, in particular on generative AI?
China began thinking about AI governance back in 2017 with the New Generation AI Development Plan and the accompanying Three-Year Action Plan, but most of its concrete legislation has come in the past few years. In 2022, it passed a law on “deep synthesis” technology, which covered generated text but mostly focused on deepfakes. That law was finalized on 25 November 2022, and ChatGPT launched five days later, on 30 November, prompting a regulatory scramble to draft a new law specifically addressing generative AI.
Before it was finalized, the generative AI law went through a revision process that watered down specific provisions, like the requirement that generated content be “true and accurate.” Some commentators saw this as signaling an end to China’s “tech crackdown,” but it’s more of an evolution or rebalancing as regulators work to promote innovation and maintain social stability. Large Language Models (LLMs) have to be licensed, and regulators have approved the public release of eight LLMs. They still generally lag behind GPT-4 and other foreign models, but Baidu is working to overcome the implementation gap.
Overall, China’s generative AI legislation provides a solid foundation for regulating these models while still encouraging development. Some clauses will need further interpretation (for example, what it means not to infringe intellectual property rights), but requirements like labeling might help prevent the proliferation of mis- and disinformation that regulators worldwide are concerned about.
The subject of generative AI and its regulation is a complex one, and not only for China. With ethical, economic, and national security issues at stake, how can this complexity be addressed?
China’s regulations explicitly discuss a trade-off that all states have to consider: balancing innovation (or economic development) against disruption (or its flipside, security). States differ in how much they prioritize each, as well as in the tactics they use to get there. Mis- and disinformation are broadly harmful, and there are some common-sense regulations that all states could benefit from, like watermarking generated content. Ultimately, though, which other issues states choose to address, from copyright infringement to deepfake pornography, will depend on each state’s priorities and the ethics systems it endorses.
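To make the labeling idea concrete, here is a minimal sketch, in Python, of one simple form of content labeling: attaching a machine-readable provenance tag to a generated image’s metadata. The function names and tag keys are hypothetical, and real provenance schemes (such as C2PA’s signed manifests) are far more robust; plain metadata can be stripped on re-encoding, which is one reason regulators also discuss watermarks embedded in the content itself.

```python
# A minimal sketch of metadata-based labeling for generated images,
# using Pillow. The tag keys and function names are illustrative;
# production provenance standards use cryptographically signed manifests.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_generated_image(in_path: str, out_path: str, generator: str) -> None:
    """Re-save an image with a provenance tag in its PNG text chunks."""
    image = Image.open(in_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")  # hypothetical tag key
    metadata.add_text("generator", generator)  # e.g., the model's name
    image.save(out_path, pnginfo=metadata)

def read_label(path: str) -> dict:
    """Return any provenance tags found in a PNG's text chunks."""
    text_chunks = getattr(Image.open(path), "text", {})
    return {k: v for k, v in text_chunks.items()
            if k in ("ai_generated", "generator")}
```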
Different regulatory models are now confronting one another. What are the differences between Chinese, European, and American policies? And where are the points of contact for a realistic global dialogue?
China is taking a vertical approach to regulation, addressing specific areas of AI, like generative AI and facial recognition, individually. However, there has been a proposal for a more comprehensive AI law that would look more like the EU AI Act. The AI Act is intended to be a single law governing all of AI and is an example of horizontal regulation. The EU is also pioneering a risk-based approach, under which high-risk AI applications are subject to more stringent requirements. The US’s approach can generously be called “fragmented”: so far, regulation has largely been left to individual states, some of which have passed laws on topics like data protection and facial recognition. However, there have been legislative proposals in Congress, and Senate Majority Leader Chuck Schumer is leading a new regulatory effort, so the federal approach is still evolving. Based on proposals and hearings so far, the US appears to be leaning towards a broad, risk-based act like the EU’s.
(all rights reserved)