The future of AI policy in China
Authors: Huw Roberts, University of Oxford, and Emmie Hine, University of Bologna
Rapid developments in generative artificial intelligence (AI) — algorithms used to create new text, pictures, audio, or other types of content — are concerning regulators globally. These systems are often trained on personal and copyrighted data scraped from the internet, leading to privacy and intellectual property fears. They can also be used to generate harmful misinformation and disinformation.
On 15 August 2023, a new Chinese law designed to regulate generative AI came into force. This law, the latest in a series of regulations targeting different aspects of AI, is internationally groundbreaking as the first law that specifically targets generative AI. It introduces new restrictions for companies providing these services to consumers regarding both the training data used and the outputs produced.
Despite these new restrictions on companies, the evolution of the draft text, combined with changes in the wider tech policy context, could mistakenly be taken to indicate that China is starting to relax its drive towards strong regulatory oversight of AI.
Commentators have been quick to observe that the final generative AI regulation is significantly watered down compared to an earlier draft published for comment. Requirements to rectify illegal content within a three-month period and to ensure that all training data and outputs are ‘truthful and accurate’ were removed. The final text also clarified that the rules apply only to public-facing generative AI systems, and a new provision was added specifying that development and innovation should be weighted equally with the security and governance of systems.
Regarding the wider tech policy context, since late 2020 the Chinese government has used a variety of tools, including antitrust and data security enforcement, to rein in its technology sector. It also undertook seemingly extra-legal measures that resulted in Jack Ma, co-founder of Alibaba, disappearing from the public eye after criticising regulators, in what has commonly been referred to as a ‘tech crackdown’. But amid the domestic economic troubles China has been facing, the intensity of this crackdown appears to have eased and been replaced by an increased emphasis on domestic tech innovation.
While compelling, these pieces of evidence are red herrings for understanding the future of AI policy in China — a significant change in China’s approach to AI governance going forward is unlikely. It is correct that the generative AI regulations were watered down, yet it has not been uncommon for the text of draft AI regulations to change after a consultation period. For instance, explicit discrimination protections were removed from a draft AI regulation focused on recommender systems in 2021.
The weakening of the generative AI regulation was arguably more significant than for previous initiatives. Yet ongoing work to regulate AI effectively, including an early draft of what could become a new, comprehensive AI law, indicates continued efforts to strengthen the country’s AI governance framework.
Similarly, the label ‘tech crackdown’ has been broadly applied to policies involving different government agencies, targets and justifications. While some policies — like the probes into technology companies — were largely reactionary and appear to have come to an end, establishing robust AI regulations has been a longer-term policy aspiration of the Chinese government that will likely continue. Together, these factors suggest that China is continuing to refine how it balances innovation and control in its approach to AI governance, rather than beginning a significant relaxation.
China’s pioneering efforts to introduce AI regulations and the legacy of reactive measures curtailing tech companies could cause a chilling effect that dampens industry outcomes in the short term. This challenge is exacerbated by the impacts of US semiconductor export controls on the Chinese AI sector, which have forced companies into workarounds as the most powerful chips become scarce. Though China has attempted to support its AI industry in several ways — such as through financing, providing access to compute and wider ministry reshuffles designed to promote domestic innovation — it is unclear how fruitful these initiatives will prove.
Notwithstanding the potential impact on China’s AI industry in the immediate term, introducing regulations designed to control AI is essential for addressing the risks from these technologies. These regulations and the practical tools they mandate mitigate harms to individuals and disruptions to social stability. For instance, requirements to watermark AI-generated content are essential for countering misinformation and disinformation.
By comparison, the laissez-faire approach taken by the United States leaves it ill-prepared to address these risks, something that could cause serious disruption in the forthcoming 2024 presidential election. AI governance tools also support China’s ambitions for global leadership in AI, for instance through developing international standards that would give it a competitive edge.
China’s fundamental approach to AI governance is unlikely to shift significantly, even as it navigates ongoing economic turbulence. A firm regulatory approach may prove economically challenging in the short term but will be essential for mitigating harm to individuals, maintaining social stability and securing international regulatory leadership in the long term.
Huw Roberts is a Doctor of Philosophy candidate at the Oxford Internet Institute, University of Oxford.
Emmie Hine is a PhD candidate in the Department of Legal Studies at the University of Bologna.
The post The future of AI policy in China first appeared on East Asia Forum.