WHY THIS MATTERS IN BRIEF
China is often the fastest to roll out regulations for new technologies, but that’s both good and bad …
Back in April, there was a major development in the Artificial Intelligence (AI) space in China. Way ahead of its international peers, the Chinese internet regulator published a draft regulation on generative AI. Named Measures for the Management of Generative Artificial Intelligence Services, the document doesn’t call out any specific company, but the way it is worded makes it clear that it was inspired by the rapid succession of Large Language Model (LLM) chatbot launches, such as OpenAI’s ChatGPT and Google’s Bard, in China and the US.
Furthermore, and most interestingly, China’s approach to regulating these powerful new AIs offers a crucial window into the Chinese Communist Party’s (CCP) fears of, and opinions on, the latest tech craze to hit the world.
Many of the clauses in the draft regulation are principles that AI critics are advocating for in the West: data used to train generative AI models shouldn’t infringe on intellectual property or privacy; algorithms shouldn’t discriminate against users on the basis of race, ethnicity, age, gender, and other attributes; AI companies should be transparent about how they obtained training data and how they hired humans to label the data.
At the same time, there are rules that other countries would likely balk at. The government is asking that people who use these generative AI tools register with their real identity – just as on any social platform in China. The content that AI software generates should also “reflect the core values of Socialism.”
Neither of these requirements is surprising. The Chinese government has regulated tech companies with a strong hand in recent years, punishing platforms for lax moderation and incorporating new products into the established censorship regime.
The document makes that regulatory tradition easy to see: there is frequent mention of other rules that have already passed in China, covering personal data, algorithms, deepfakes, cybersecurity, and more. In some ways, it feels as if these discrete documents are slowly forming a web of rules that helps the government process new challenges in the tech era.
The fact that the Chinese government can react so quickly to a new tech phenomenon is a double-edged sword. The strength of this approach, which looks at every new tech trend separately, “is its precision, creating specific remedies for specific problems,” wrote Matt Sheehan, a fellow at the Carnegie Endowment for International Peace. “The weakness is its piecemeal nature, with regulators forced to draw up new regulations for new applications or problems.”
If the government is busy playing whack-a-mole with new rules, it could miss the opportunity to think strategically about a long-term vision for AI. We can contrast this approach with that of the EU, which has been working on a “hugely ambitious” AI Act for years; a recent revision of the AI Act, which I discussed a while ago, now includes regulations on generative AI.
Despite its restrictive nature, though, the document also reads as tacit encouragement for companies to keep working on AI. The maximum proposed fine for violating the rules is 100,000 RMB – about $15,000, a minuscule sum for any company with the capacity to build such LLMs.
Of course, if a company is fined each time its AI model violates the rules, the amounts can pile up. But the size of the fine suggests that the rules are not made to scare the companies away from investing in AI. As Angela Zhang, a law professor at the University of Hong Kong, recently wrote, the government is playing multiple roles: “The Chinese government should not only be viewed as a regulator, but also as an advocate, sponsor, and investor in AI. Ministries championing AI development, along with state sponsors and investors, are poised to become a potent counterbalance against stringent AI regulation.”
It may still take a few months before regulators finalize the draft, and months after that before it goes into effect. But I know that many people, including me, will be keeping a close eye on any changes.
Who knows? By the time the regulation goes into effect, there could be another new viral AI product that compels the government to come up with yet more rules. You never know …