On December 1, 2022, a committee of Brazil's Federal Senate submitted and published a study report on artificial intelligence regulation together with a draft AI law, opening Brazil's path toward AI legislation. In the 908-page report, the Senate committee reviews Brazil's earlier AI regulatory proposals and the regulatory measures adopted (or planned) by OECD member states, and compiles the comments collected through public hearings.
The Draft Brazilian AI Law centers on the following 13 main aspects: twelve enumerated principles; the definition of an "AI system"; risk-assessment requirements for AI systems and the publication of assessment results; criteria for high-risk AI systems; the scope of prohibited AI systems; the rights of individuals affected by AI; governance of AI systems and codes of conduct; the civil liability of AI providers and users (presumed fault, strict liability); the duty to report serious security incidents; fair use under copyright; enforcement; and more. In detail:
(1) Principles
The Draft AI Law provides that the development, implementation, and use of AI in Brazil must adhere to the principle of good faith, as well as (among others): self-determination and freedom of choice; transparency, explainability, intelligibility, traceability, and auditability (to mitigate the risks of both intentional and unintentional uses); human participation in, and supervision of, the "AI life cycle"; non-discrimination, justice, equity, and inclusion; due legal process, contestability, and compensation for damages; reliability and robustness of AI systems and information security; and proportionality and efficacy in the use of AI.
(2) Definition of an "AI System"
The Draft AI Law defines an "AI system" as a computational system with varying degrees of autonomy, designed to infer how to achieve a given set of objectives using approaches based on machine learning and/or logic and knowledge representation, via input data from machines or humans, with the goal of producing predictions, recommendations, or decisions that can influence the virtual or real environment. This definition aligns, at least in part, with the OECD's definition of the same term, which other jurisdictions have also adopted or drawn on when formulating their own AI legislative proposals.
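The definition is descriptive rather than technical, but its elements (degree of autonomy, technical approach, inputs, and outputs) read like a checklist. Purely as an illustration, and with type and field names of my own invention rather than anything taken from the Draft AI Law, a provider's internal inventory might capture those elements roughly as follows:

```python
from dataclasses import dataclass, field
from enum import Enum


class Approach(Enum):
    """Technical approaches named in the draft definition."""
    MACHINE_LEARNING = "machine learning"
    LOGIC_AND_KNOWLEDGE_REPRESENTATION = "logic and knowledge representation"


class Output(Enum):
    """Outputs an AI system may produce under the definition."""
    PREDICTION = "prediction"
    RECOMMENDATION = "recommendation"
    DECISION = "decision"


@dataclass
class AISystemRecord:
    """Hypothetical inventory entry mirroring the definitional elements (not statutory)."""
    name: str
    objectives: list[str]            # the "given set of objectives" the system infers how to achieve
    approaches: list[Approach]       # machine learning and/or logic and knowledge representation
    autonomy: str                    # degree of autonomy varies; described in free text here
    outputs: list[Output]
    input_sources: list[str] = field(default_factory=lambda: ["machine", "human"])
    influences_environment: bool = True   # outputs can affect the real or virtual environment


# Illustrative entry for a hypothetical credit-scoring model.
example = AISystemRecord(
    name="credit-scoring-model",
    objectives=["estimate probability of default"],
    approaches=[Approach.MACHINE_LEARNING],
    autonomy="recommendations reviewed by a human analyst",
    outputs=[Output.PREDICTION, Output.RECOMMENDATION],
)
print(example)
```

Nothing in the Draft AI Law requires such a register; the point is only that every definitional element is something a team can document.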
(3) Risk Assessment
Providers and users of AI systems must conduct and document a risk assessment before placing any AI system on the market.
(4) High-Risk AI Systems
The Draft AI Law offers an enumerated list of "high-risk" AI systems, which includes AI systems used in the following contexts: securing the operation of critical infrastructure; education and vocational training; recruiting; credit scoring; autonomous vehicles (where their use could cause physical harm to natural persons); and biometric identification. Notably, the Draft AI Law also classifies health applications (e.g., medical devices) as high-risk AI systems. The competent authority (see "Enforcement" below) is responsible for periodically updating the list in accordance with a number of criteria set out in the Draft AI Law.
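The enumerated contexts appear above only in prose; while the list stays short, a compliance team might encode it as a simple lookup. The sketch below is illustrative only: the labels are paraphrases of the summary above, not statutory language, and the competent authority can update the real list over time.

```python
# Paraphrased from the enumerated contexts summarized above (not statutory text);
# the competent authority may periodically update the actual list.
HIGH_RISK_CONTEXTS = {
    "critical_infrastructure_security",
    "education_and_vocational_training",
    "recruiting",
    "credit_scoring",
    "autonomous_vehicles_with_physical_risk",
    "biometric_identification",
    "health_applications",  # e.g., medical devices
}


def is_high_risk(use_context: str) -> bool:
    """Return True if a declared use context appears on the sketched high-risk list."""
    return use_context in HIGH_RISK_CONTEXTS


print(is_high_risk("recruiting"))        # True
print(is_high_risk("spam_filtering"))    # False
```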
(5) Public Database of High-Risk AI Systems
The competent authority is also tasked with creating and maintaining a publicly accessible database of high-risk AI systems, which will contain (among other information) the completed risk assessments of providers and users of such systems. Those assessments will be protected under applicable intellectual property and trade secret laws.
(6) Prohibited AI Systems
The Draft AI Law prohibits AI systems that (i) deploy subliminal techniques, or (ii) exploit the vulnerabilities of specific groups of natural persons, whenever such techniques or exploitation are intended to be, or have the effect of being, harmful to the health or safety of the end user. Similarly, the Draft AI Law prohibits public authorities from conducting social scoring and from using biometric identification systems in publicly accessible spaces, unless a specific law or court order expressly authorizes the use of such systems (e.g., for the prosecution of crimes).
(7) Rights of Individuals
The Draft AI Law grants persons affected by AI systems the following rights vis-à-vis "providers" and "users" of AI systems, regardless of the risk classification of the AI system:
·Right to information about their interactions with an AI system prior to using it – in particular, through information that discloses (among other things): the use of AI, including a description of its role, any human involvement, and the decision(s)/recommendation(s)/prediction(s) it is used for (and their consequences); the identity of the provider of the AI system and the governance measures adopted; the categories of personal data used; and the measures implemented to ensure security, non-discrimination, and reliability (a hypothetical structure for such a disclosure is sketched after this list);
·Right to an explanation of a decision, recommendation, or prediction made by an AI system within 15 days of the request – in particular, information about the criteria and procedures used and the main factors affecting the particular forecast or decision (e.g., the rationale and logic of the system, how much it affected the decision made, and so forth);
·Right to challenge decisions or predictions of AI systems that produce legal effects or significantly impact the interests of the affected party;
·Right to human intervention in decisions made solely by AI systems, taking into account the context and the state of the art of technological development;
·Right to non-discrimination and the correction of discriminatory bias, particularly where it results from the use of sensitive personal data leading to (a) a disproportionate impact arising from protected personal characteristics, or (b) disadvantages or vulnerabilities for people belonging to a specific group, even when apparently neutral criteria are used; and
·Right to privacy and the protection of personal data, in accordance with the Brazilian General Data Protection Law ("LGPD").
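As a rough illustration of the first two rights in the list above, the sketch below groups the items a pre-use disclosure must cover into one record and computes the 15-day window for an explanation request. The structure and field names are my own; the Draft AI Law does not prescribe any particular format.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta


@dataclass
class PreUseDisclosure:
    """Hypothetical grouping of the items covered by the right to information."""
    system_role: str                        # what the AI does and the decisions/recommendations/predictions it informs
    human_involvement: str                  # whether and how humans take part in the outcome
    consequences: str                       # consequences of the outputs for the affected person
    provider_identity: str
    governance_measures: list[str]
    personal_data_categories: list[str]
    safeguards: list[str] = field(default_factory=list)  # security, non-discrimination, reliability measures


def explanation_deadline(request_date: date, days: int = 15) -> date:
    """Right to an explanation: the article reports a 15-day window from the request."""
    return request_date + timedelta(days=days)


print(explanation_deadline(date(2023, 3, 1)))  # 2023-03-16
```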
(8) Governance and Codes of Conduct
Providers and users of all AI systems must establish governance structures and internal processes capable of ensuring the security of such systems and facilitating the rights of affected individuals, including (among others) testing and privacy-by-design measures.
Providers and users of "high-risk" AI systems must implement heightened measures, such as: conducting an algorithmic impact assessment, which must be made publicly available and may need to be periodically repeated; designating a team to ensure the AI system is informed by diverse viewpoints; and implementing technical measures to assist with explainability.
Further, providers and users of AI systems may also draw up codes of conduct and governance to support the practical implementation of the Draft AI Law's requirements.
(9) Serious Security Incidents
Providers and users of AI systems must notify the competent authority of serious security incidents, including those involving risk to human life or the physical integrity of persons, interruption of critical infrastructure operations, serious damage to property or the environment, and any other serious violations of fundamental human rights.
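Purely as a sketch (the article does not describe any notification format), the reportable incident categories mentioned above could be enumerated so that internal tooling can tag incidents consistently; the labels are my paraphrases.

```python
from enum import Enum


class SeriousIncident(Enum):
    """Incident categories paraphrased from the summary above (not statutory text)."""
    RISK_TO_LIFE_OR_PHYSICAL_INTEGRITY = "risk to human life or the physical integrity of persons"
    CRITICAL_INFRASTRUCTURE_INTERRUPTION = "interruption of critical infrastructure operations"
    PROPERTY_OR_ENVIRONMENTAL_DAMAGE = "serious damage to property or the environment"
    FUNDAMENTAL_RIGHTS_VIOLATION = "other serious violation of fundamental human rights"


# Any of these categories would trigger notification of the competent authority.
for category in SeriousIncident:
    print(category.name, "->", category.value)
```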
(10) Civil Liability
Providers and users of AI systems are liable for damage caused by the AI system, regardless of the system's degree of autonomy. Further, providers and users of "high-risk" AI systems are strictly liable to the extent of their participation in the damage; for other AI systems, the fault of providers and users in causing the damage is presumed.
(11) Copyright
The automated use of existing works by AI systems – such as their extraction, reproduction, storage, and transformation in data- and text-mining processes – for activities carried out by research organizations and institutions, journalists, museums, archives, and libraries will not necessarily constitute copyright infringement under certain scenarios listed in the Draft AI Law.
(12) Sandboxes
The Draft AI Law provides that the competent authority may regulate testing environments (sandboxes) to support the development of innovative AI systems.
(13) Enforcement
The Brazilian Government must designate a competent authority to oversee the implementation and enforcement of the Draft AI Law. Depending on the violation, administrative fines of up to 50 million Reais (approximately 9 million Euros) or 2% of a company's turnover may be imposed.
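To make the two ceilings concrete, here is a small arithmetic sketch. The article does not say how the flat cap and the turnover-based cap interact in a given case, so the function simply returns both figures; the turnover value in the example is invented.

```python
def fine_ceilings(company_turnover_brl: float,
                  flat_cap_brl: float = 50_000_000,
                  turnover_rate: float = 0.02) -> dict:
    """Return the two ceilings reported in the article: a flat cap of 50 million
    Reais and 2% of the company's turnover. How the alternatives interact is not
    specified here, so both are returned rather than combined."""
    return {
        "flat_cap_brl": flat_cap_brl,
        "turnover_based_cap_brl": turnover_rate * company_turnover_brl,
    }


# Example: a company with 1 billion Reais in annual turnover.
print(fine_ceilings(1_000_000_000))
# {'flat_cap_brl': 50000000, 'turnover_based_cap_brl': 20000000.0}
```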
Next steps
The Senate will use the Draft AI Law as a basis for drafting and approving a bill, which will then be discussed in the Chamber of Deputies.
References:
Evangelos Sakiotis, Anna Oberschelp de Meneses, Nicholas Shepherd & Kristof Van Quathem, Brazil’s Senate Committee Publishes AI Report and Draft AI Law, January 27, 2023, https://www.insideprivacy.com/artificial-intelligence/brazils-senate-committee-publishes-ai-report-and-draft-ai-law/.
Source: 数字治理全球洞察 (Digital Governance Global Insights)