Authors: Yuan Lizhi, Zhu Lei
On July 13, 2023, the Cyberspace Administration of China ("CAC") and several other ministries and commissions issued the Interim Measures on Generative AI Services (the "Interim Measures").
This is the first dedicated legislation in the field of generative AI in China, establishing basic standards for generative AI services.
1
Overall Comments on the Interim Measures
01
Industrial Development and Security Assurance
Compared with the Measures for the Administration of Generative AI Services (Exposure Draft) ("Draft for Comments"), the Interim Measures emphasize the governance principle of "attaching equal importance to development and security", encourage the innovative development of generative AI technologies and applications, and achieve a balance between regulatory governance and scientific and technological innovation. For example, the legislative basis of the Interim Measures has been supplemented with the Law of the People's Republic of China on Science and Technology Advancement, and some clauses on the promotion of industrial development have been added to the body of the Interim Measures, including provisions on supporting basic technology innovation, exploration of application scenarios, construction of ecological systems, and construction of data resources.
In addition, it is worth noting that the Interim Measures are also signed and issued by the National Development and Reform Commission ("NDRC"). The NDRC is responsible for coordinating industrial development. Taken together with the establishment of the National Data Bureau in early 2023, it can be observed that the Central Government's regulatory approach of balancing development and security in the field of data production elements has extended to the field of AI, thereby releasing a positive signal of promoting the development of the AI industry.
02
Layer-by-Layer Approach of Law-making and Regulatory Innovation
The Interim Measures bear the typical characteristics of a layer-by-layer approach to law-making: most of their articles implement, apply, or restate existing laws in the field of generative AI, such as content security management requirements, the legal basis for processing personal information and responding to data subject rights requests, data security, application security, algorithmic security assessment, and algorithm record-filing. Even without the Interim Measures, generative AI service providers would still be subject to the relevant legal obligations under the Cybersecurity Law, the Data Security Law, the Personal Information Protection Law, the Provisions on the Ecological Governance of Network Information Content, the Provisions on the Management of Algorithm Recommendation for Internet Information Services, and the Provisions on the Management of Deep Synthesis of Internet Information Services.
The Interim Measures have also raised several new regulatory requirements for generative AI, such as model training data quality requirements, data annotation requirements, service descriptions, generated content identification, etc. Compared with the Draft for Comments released in April 2023, the expressions in these provisions of the Interim Measures are more in line with the characteristics of generative AI (e.g. requirements on the authenticity and accuracy of the generated content) and are more flexible (e.g. time requirements for optimizing the model when the generated content is found to be illegal).
03
Special Legislation and Future Unified Legislation
The Interim Measures are a special piece of legislation for generative AI, aimed at solving the urgent problems raised by the current R&D and application of large AI models. Constrained by limited practical experience, preparation time, the layer-by-layer approach, and other factors, the Interim Measures mainly continue the existing legal framework and make several regulatory innovations on that basis. The broader issue of unified regulation of AI cannot be solved by the Interim Measures and can only be left to subsequent legislation.
We note that, according to the Circular of the General Office of the State Council on the Annual Legislation Plan for 2023 issued in early June 2023, the Law on Artificial Intelligence has been included in the 2023 Legislative Plan, and relevant drafts will be submitted to the NPC Standing Committee for deliberation. This means that China's unified AI supervision legislation has been put on the agenda. Compared with the current low-level, layer-by-layer legislation, a unified law on AI holds greater potential for achieving breakthroughs in system and institution building, and for providing better legal support for the development and security of the AI industry.
2
Interpretation of Key Clauses of the Interim Measures
01
Generative AI and Deep Synthesis
The object regulated by the Interim Measures is generative AI. So, what is generative AI? And how does it differ from terms such as "deep synthesis" and "synthesis" used in earlier regulations?
According to Article 22, Paragraph 1 of the Interim Measures, generative AI technologies refer to models and related technologies that can generate content such as text, images, audio, and video. They are applied in many service scenarios, such as media, film and television, office work, social networking, and e-commerce. Representative products include ChatGPT, Midjourney, Stable Diffusion, Alibaba's Tongyi Qianwen, and iFLYTEK Spark Desk.
Deep synthesis technology refers to technology that produces network information such as text, images, audio, video, and virtual scenes by using generative or synthetic algorithms such as deep learning and virtual reality, for example text conversion, face generation, image restoration, and digital humans. Typical application scenarios include AI face swapping, one-click beautification, AI voice changing, AI video generation, and virtual digital avatar synthesis.
We understand that generative AI and deep synthesis overlap, with the former being broader than the latter. Compared with generative AI, deep synthesis places more emphasis on deeply creative processing, such as image restoration, text-to-speech, and immersive scene simulation; its output is more creative or more easily used for forgery, so the risk of abuse is higher, for example in fraud or misleading public opinion. Generative AI and deep synthesis technology can be collectively referred to as synthesis and generative AI technology.
02
Scope of Application and Exemption
According to Articles 2 and 20 of the Interim Measures, the Interim Measures apply to services that provide generated content such as text, images, audio, and video to the public within the mainland territory of the PRC. The Interim Measures explicitly exclude from their scope R&D and application activities that do not provide services to the public within the territory of China. Therefore, the key criterion for the application of the Interim Measures is "providing services to the public within the territory of the PRC", regardless of whether the provider is an organization or individual within the territory of the PRC, and regardless of whether, or whom, the provider charges for the service.
Example 1: Company A develops a large language model on its own, applies it to the App it operates, and provides services to the end users in the Chinese Mainland. This is a typical service provided to the public within the mainland territory of PRC, and the Interim Measures shall apply to Company A.
Example 2: Company A develops a large language model on its own, which is still in the R&D stage, and only invites a few users for testing, or only uses it for internal operation and management, but does not launch it into the market to provide services to the public. In this scenario the Interim Measures shall not apply to Company A.
Example 3: Company A develops a large language model on its own, provides it to Company B via an API, and charges technical service fees. Company B, using the capabilities of the large language model, provides services to end users within the mainland territory of the PRC. In this scenario, the Interim Measures shall apply to both Company A and Company B. If Company B only provides services to overseas end users, the Interim Measures shall not apply to Company B.
Example 4: Company A provides basic cloud services to Company B, and Company B trains a large language model based on the cloud service and provides services to end users within the mainland territory of PRC. In this case, the Interim Measures shall not apply to Company A, but shall apply to Company B.
The Interim Measures differ from the Provisions on the Management of Deep Synthesis of Internet Information Services in that they do not distinguish between service providers and technical supporters. In other words, whether a party provides services directly to end users or provides services to businesses through an API or other methods, it constitutes a "provider" under the Interim Measures.
03
Authenticity and Accuracy of Generated Content
In the Draft for Comments released in April 2023, the competent authorities required that "the content generated by AI shall be authentic and accurate." At that time, we commented that the models do not answer queries by retrieving or accessing data in a database or on the web; they predict answers based, in large part, on the likelihood of words appearing in connection with one another (akin to a Markov chain). From a human perspective, there is a certain degree of "creativity" (which is precisely the value of generative AI), and therefore the authenticity and accuracy of the generated content cannot be guaranteed. We suggested that on this issue it would be appropriate to set requirements for behavior, not results, as doing otherwise would not be conducive to industrial development.
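The point that generation is probabilistic prediction rather than database retrieval can be illustrated with a toy word-level Markov-chain (bigram) model. This is a deliberate simplification for illustration only; real large language models use neural networks over much longer contexts, but the core idea is the same: output is sampled from learned word-co-occurrence likelihoods, not looked up, so factual accuracy cannot be guaranteed by construction. The corpus and all names below are invented for the sketch.

```python
import random
from collections import Counter, defaultdict

# A tiny illustrative corpus; a real model would train on vast text collections.
corpus = (
    "the measures apply to services provided to the public "
    "the measures encourage the development of generative ai "
    "the measures require a security assessment of generative ai services"
).split()

# Count how often each word follows each preceding word.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to observed co-occurrence counts."""
    counts = transitions[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

def generate(start, length=8):
    """Generate a word sequence by repeated probabilistic prediction."""
    out = [start]
    for _ in range(length):
        if out[-1] not in transitions:
            break  # no observed continuation for this word
        out.append(next_word(out[-1]))
    return " ".join(out)

# Each run may produce a different, plausible-sounding sequence:
# the model predicts, it does not retrieve facts.
print(generate("the"))
```

Because each step is a weighted random draw, the same prompt can yield different outputs on different runs, which is exactly why the Draft's "authentic and accurate" result requirement was unworkable as drafted.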
Many experts made similar comments at the time, and the competent authorities followed the advice. Article 4 of the Interim Measures adjusted the wording to "based on the characteristics of the type of service, effective measures shall be taken to enhance the transparency of generative AI services and to improve the accuracy and reliability of the generated content". This only requires providers to take appropriate measures to promote accuracy and reliability as far as possible, rather than to guarantee the accuracy and reliability of the generated content. The adjustment is of positive significance for the development of the industry.
04
Responsibility of Content Producers
In the Internet era, the law adopts a "dichotomy" approach to the governance of network content, distinguishing between platforms and content producers. Producers are responsible for the security of the content they produce and publish, while platforms are responsible for managing and controlling the dissemination of information within the platform.
The emergence of generative AI challenges the above regulatory approach. Unlike the Internet era, in which humans used technical tools to edit and publish content, AI Generated Content ("AIGC") is completed jointly by humans and machines: humans input instructions or corpora, and machines generate the content. Machines are substantially involved in content "creation", and their contribution to the "creativity" of AIGC may even exceed that of the natural persons who use them. They are no longer pure technical tools, and with the continuous iteration of large models, the machine's degree of contribution may grow ever higher while the user's grows lower. In this situation, defining the provider of generative AI merely as an information publishing platform and the user as the content producer would be inconsistent with the facts and would undermine the governance of content security. Yet it seems difficult to precisely measure the respective contributions of human and machine and to allocate responsibilities between the two accordingly. We understand that it is based on these considerations that Article 9 of the Interim Measures directly requires the provider to bear the responsibility of the content producer.
Although the Interim Measures do not define users as content producers, users must still abide by content security requirements under the Cybersecurity Law, the Provisions on the Ecological Governance of Network Information Content, the Interim Measures, and other provisions. For example, they may not input illegal instructions, guide AI to generate illegal content, or publish content infringing others' rights and interests. Otherwise, they may be subject to measures such as a warning, blocking of their accounts, and suspension of services.
05
Construction of Training Data Resources
Articles 5 and 6 of the Interim Measures encourage the construction of data resources: promoting the building of public training data resource platforms, speeding up the orderly opening of classified and graded public data, and expanding high-quality public training data resources. The construction of data resources is extremely important for the current rapid development of the AI industry.
The pre-training of large models requires huge amounts of data, and optimizing and iterating the models requires continuously feeding in data for fine-tuning. At present, lagging data resource production, intractable data-source compliance issues, and the isolation of data resources among enterprises have seriously hindered the development of generative AI.
According to Article 7 of the Interim Measures, the data used for pre-training and fine-tuning generative AI shall meet a series of legal requirements: it must come from legitimate sources and must not infringe intellectual property rights or personal information rights and interests. However, it is quite challenging to guarantee the legality of training data. At present, the data used by enterprises to train large models includes their own stock data, commercially available public data, public Internet data, user behavior data, etc. These data come from different sources, so the compliance requirements also differ, and achieving full compliance would entail huge costs.
The orderly opening of public data and the construction of public training data resources help alleviate the shortage of data resources and reduce enterprises' compliance costs in obtaining data.
06
Security Assessment and Algorithm Record-filing
According to Article 17 of the Interim Measures, read together with the Provisions on the Management of Algorithm Recommendation for Internet Information Services, the Provisions on the Security Assessment of Internet Information Services with Public Opinion Attributes or Social Mobilization Capabilities, and the Provisions on the Management of Deep Synthesis of Internet Information Services, providers of generative AI services with public opinion attributes or social mobilization capabilities shall conduct a security assessment and complete the algorithm record-filing formalities. Security assessment and algorithm record-filing are at present the main regulatory measures in the AI sector.
Previously, given the limited enforcement related to security assessment and algorithm record-filing, enterprises carried out the related work largely at their own discretion, and there were no known cases of punishment relating to algorithm record-filing or security assessment. With the deepening of regulation on generative AI, some Apps involving AIGC have been removed from application stores one after another this year; some large models at the testing stage hesitate to proceed to provide services to the public; and some App operators are required by application stores to provide supporting materials such as algorithmic security assessment reports and algorithm record-filing certificates, failing which their products are not allowed onto the stores. It is therefore necessary for enterprises involved in generative AI to conduct security assessments and algorithm record-filing.
As far as algorithm record-filing is concerned, generative AI shall generally be filed for record with the Internet Information Service Algorithm Record-filing Platform (beian.cac.gov.cn). During the record-filing process, an enterprise shall explain algorithm attributes such as usage scenarios, input and output data, algorithm models, algorithm strategies, and mechanisms, and shall conduct a self-assessment of the algorithm's security risks and submit a self-assessment report. After the algorithm has been filed for record, the CAC will release the enterprise's record-filing information, but will only publicize the algorithm's name, role, subject, application products, usage, and record-filing number, and will not publicize the algorithm's detailed mechanism.
Generative AI with public opinion attributes or social mobilization capabilities may not be made available to the public without a security assessment. This is not the security assessment conducted in the process of algorithm record-filing, but the security assessment of new technologies and new applications carried out under the Provisions on the Security Assessment of Internet Information Services with Public Opinion Attributes or Social Mobilization Capabilities, known in practice as the "large model double-new assessment". It also differs from the traditional double-new assessment in respects such as the assessing department, the material submission process, and the assessment criteria. Based on the limited information available to us, no large model has yet passed the security assessment.
Based on our experience serving clients, security assessment and algorithm record-filing require a long period, a heavy workload, and extremely fine-grained supporting materials; enterprises are therefore advised to start as soon as possible.
3
Generative AI Services Compliance Checklist
In accordance with the Interim Measures and relevant legal provisions, we have prepared a Generative AI Services Compliance Checklist for reference by relevant businesses.
For the Generative AI Service Compliance Checklist, please click Read Original to view and download.
About the Authors
Yuan Lizhi holds a master's degree in international law from Shanghai University of International Business and Economics and a master's degree in international commercial law from the National University of Singapore. He joined Jingtian & Gongcheng in 2016.
Mr. Yuan is a Certified Information Privacy Professional (CIPP/E), a Certified Information Privacy Manager (CIPM), and a Certified Information Security Professional (CISP), and has participated in the drafting of a number of information security and big data standards.
Mr. Yuan also serves as a distinguished researcher at the Digital Rule of Law Institute of East China University of Political Science and Law, and as an external practice mentor at the Law School of East China Normal University.
Mr. Yuan's practice areas are cyber and data law, TMT, fintech, and frontier technology legal matters. He has provided technology-related legal services to many well-known domestic and international enterprises, handled a series of cutting-edge and challenging projects, and accumulated rich practical experience; he is a recognized expert in this field.
Mr. Yuan has repeatedly been named a recommended lawyer by The Legal 500 Asia-Pacific in the TMT (telecoms, media and technology) and data protection fields; he is ranked in the first tier for "Cybersecurity & Data" in the LEGALBAND China rankings, and is listed in LEGALBAND's China Top 15 specially recommended lawyers for "Cybersecurity & Data Compliance" and for "Artificial Intelligence & High Tech".