
Note: this article is taken from the October 28th issue of The Economist.


Think, then act

Governments must not rush into policing AI

A summit in Britain will focus on “extreme” risks. But no one knows what they look like

WILL ARTIFICIAL intelligence kill us all? Some technologists sincerely believe the answer is yes. In one nightmarish scenario, AI eventually outsmarts humanity and goes rogue, taking over computers and factories and filling the sky with killer drones. In another, large language models (LLMs) of the sort that power generative AIs like ChatGPT give bad guys the know-how to create devastating cyberweapons and deadly new pathogens.
It is time to think hard about these doomsday scenarios. Not because they have become more probable—no one knows how likely they are—but because policymakers around the world are mulling measures to guard against them. The European Union is finalising an expansive AI act; the White House is expected soon to issue an executive order aimed at LLMs; and on November 1st and 2nd the British government will convene world leaders and tech bosses for an “AI Safety Summit” to discuss the extreme risks that AI models may pose.
Governments cannot ignore a technology that could change the world profoundly, and any credible threat to humanity should be taken seriously. Regulators have been too slow in the past. Many wish they had acted faster to police social media in the 2010s, and are keen to be on the front foot this time. But there is danger, too, in acting hastily. If they go too fast, policymakers could create global rules and institutions that are aimed at the wrong problems, are ineffective against the real ones and which stifle innovation.
The idea that AI could drive humanity to extinction is still entirely speculative. No one yet knows how such a threat might materialise. No common methods exist to establish what counts as risky, much less to evaluate models against a benchmark for danger. Plenty of research needs to be done before standards and rules can be set. This is why a growing number of tech executives say the world needs a body to study AI, much like the Intergovernmental Panel on Climate Change (IPCC), which tracks and explains global warming.
A rush to regulate away tail risks could distract policymakers from less apocalyptic but more pressing problems. New laws may be needed to govern the use of copyrighted materials when training LLMs, or to define privacy rights as models guzzle personal data. And AI will make it much easier to produce disinformation, a thorny problem for every society.
Hasty regulation could also stifle competition and innovation. Because of the computing resources and technical skills required, only a handful of companies have so far developed powerful “frontier” models. New regulation could easily entrench the incumbents and block out competitors, not least because the biggest model-makers are working closely with governments on writing the rule book. A focus on extreme risks is likely to make regulators wary of open-source models, which are freely available and can easily be modified; until recently the White House was rumoured to be considering banning firms from releasing frontier open-source models. Yet if those risks do not materialise, restraining open-source models would serve only to limit an important source of competition.
Regulators must be prepared to react quickly if needed, but should not be rushed into setting rules or building institutions that turn out to be unnecessary or harmful. Too little is known about the direction of generative AI to understand the risks associated with it, let alone manage them.
The best that governments can do now is to set up the infrastructure to study the technology and its potential perils, and ensure that those working on the problem have adequate resources. In today’s fractious world, it will be hard to establish an IPCC-like body, and for it to thrive. But bodies that already work on AI-related questions, such as the OECD and Britain’s newish Frontier AI Taskforce, which aims to gain access to models’ nuts and bolts, could work closely together.
It would help if governments agreed to a code of conduct for model-makers, much like the “voluntary commitments” negotiated by the White House and to which 15 makers of proprietary models have already signed up. These oblige model-makers, among other things, to share information about how they are managing AI risk. Though the commitments are not binding, they may help avoid a dangerous free-for-all. Makers of open-source models, too, should be urged to join up.
As AI develops further, regulators will have a far better idea of what risks they are guarding against, and consequently what the rule book should look like. A fully fledged regime could eventually look rather like those for other technologies of world-changing import, such as nuclear power or bioengineering. But creating it will take time—and deliberation. ■
