
2020 Postgraduate Entrance Exam English Final Sprint: Reading Comprehension Prediction (3)

Source: 跨考 (Kuakao), 2019-11-27

  As is well known, in the postgraduate English exam, whoever masters reading masters English: reading comprehension carries the highest share of the marks. Candidates therefore need a large vocabulary and extensive reading experience, especially with foreign newspapers and magazines. Today we have compiled an article from the British and American press as a reading prediction for the 2020 exam. We hope it helps.

  This piece from The Guardian falls under "culture, history and education", one of the four major topic areas in postgraduate English reading. Please read the article carefully and practise grasping the gist of the passage and of each paragraph quickly, and distinguishing details from key information sentences, so as to sharpen your reading skills.

  Source publication: The Guardian

  Original title: The Guardian view on the ethics of AI: it’s about Dr Frankenstein, not his monster

  This article appeared in The Guardian on 12 June 2018. Prompted by the incident in which Google's application of AI to the military sparked strong protests from its employees, it examines the ethics of AI and argues that the issue ultimately lies with the people who create and use it.

  Structure: the topic is introduced (paragraph I), a recent event is presented (paragraph II), the event is assessed (paragraph III), and the author's view is stated (paragraph IV).

  PART 1

  Original Text

  I ①Frankenstein’s monster *haunts discussions of the ethics of artificial intelligence: the fear is that scientists will create something that has purposes and even desires of its own and which will carry them out at the expense of human beings. ②This is a misleading picture because it suggests that there will be a moment at which the monster comes alive: the switch is thrown, the program run, and after that its human creators can do nothing more. ③In real life there will be no such *singularity. ④Construction of AI and its *deployment will be continuous processes, with humans involved and to some extent responsible at every step.

  II ①This is what makes Google’s declarations of ethical principles for its use of AI so significant, because it seems to be the result of a *revolt among the company’s programmers. ②The senior management at Google saw the supply of AI to the Pentagon as a goldmine, if only it could be kept from public knowledge. ③'Avoid at all costs any mention or implication of AI,' wrote Google Cloud's chief scientist for AI in a *memo. ④'I don't know what would happen if the media starts picking up a theme that Google is secretly building AI weapons or AI technologies to enable weapons for the Defense industry.'

  III ①That, of course, is exactly what the company had been doing. ②Google had been subcontracting for the Pentagon on Project Maven, which was meant to bring the benefits of AI to war-fighting. ③Then the media found out and more than 3,000 of its own employees protested. ④Only two things frighten the tech giants: one is the stock market; the other is an organised workforce. ⑤The employees’ agitation led to Google announcing six principles of ethical AI, among them that it will not make weapons systems, or technologies whose purpose, or use in *surveillance, violates international principles of human rights. ⑥This still leaves a huge intentional exception: profiting from “*non-lethal” defence technology.

  IV ①Obviously we cannot expect all companies, still less all programmers, to show this kind of ethical *fine-tuning. ②Other companies will bid for Pentagon business in the US: Google had to beat IBM, Amazon and Microsoft to gain the Maven contract. ③But in all these cases, the companies involved – which means the people who work for them – will be actively involved in maintaining, *tweaking and improving the work. ④This opens an opportunity for consistent ethical pressure and for the attribution of responsibility to human beings and not to *inanimate objects. ⑤Questions about the ethics of artificial intelligence are questions about the ethics of the people who make it and the purposes they put it to. ⑥It is not the monster, but the good Dr Frankenstein we need to worry about most.

  PART 2

  Vocabulary and Phrases

  1.*haunt [hɔ:nt] v. to recur constantly in; to trouble persistently

  2.ethics ['eθɪks] n. ethics; moral principles; standards of conduct

  3.artificial intelligence (abbreviated AI)

  4.at the expense of: at the cost of

  5.misleading [mɪs'li:dɪŋ] a. giving a wrong impression

  6.*singularity [sɪŋgjʊ'lærɪtɪ] n. a unique or singular point; (here) a single decisive moment

  7.*deployment [di:'plɒɪmənt] n. deployment; putting into use

  8.*revolt [rɪ'vəʊlt] n. rebellion; revolt

  9.goldmine ['gəuldmain] n. a gold mine; a rich source of profit

  10.implication [ɪmplɪ'keɪʃ(ə)n] n. implication; hint

  11.*memo ['meməʊ] n. memorandum

  12.subcontract [sʌbkən'trækt] v. to subcontract

  13.workforce ['wɜ:kfɔ:s] n. the entire staff; workforce

  14.agitation [ædʒɪ'teɪʃən] n. agitation; public protest

  15.*surveillance [sə'veɪləns] n. surveillance; close observation

  16.*lethal ['li:θəl] a. deadly; fatal

  17.*non-lethal a. not deadly

  18.*fine-tuning ['fain'tju:niŋ] n. fine adjustment

  19.bid for: to make a bid for; to tender for

  20.*tweak [twi:k] v. to adjust slightly (a machine, system, etc.)

  21.attribution [,ætrɪ'bju:ʃən] n. attribution; ascription

  22.*inanimate [ɪn'ænɪmət] a. lifeless; inanimate

  (Note: words marked with * are beyond the exam syllabus.)

  PART 3

  Translation and Commentary

  I ①Frankenstein’s monster *haunts discussions of the ethics of artificial intelligence: the fear is that scientists will create something that has purposes and even desires of its own and which will carry them out at the expense of human beings. ②This is a misleading picture because it suggests that there will be a moment at which the monster comes alive: the switch is thrown, the program run, and after that its human creators can do nothing more. ③In real life there will be no such *singularity. ④Construction of AI and its *deployment will be continuous processes, with humans involved and to some extent responsible at every step.

  Translation: Frankenstein's monster (created by Dr Frankenstein) haunts discussions of AI ethics: people fear that scientists will create something with purposes, even desires, of its own, which it will pursue at the expense of human beings. This picture is misleading, because it implies there will be a moment when the monster comes alive: the switch is flipped, the program runs, and from then on its human creators can do nothing. In real life no such singular moment will occur. The construction and deployment of AI will be continuous processes, with humans involved and, to some extent, responsible at every step.

  Commentary: Paragraph I introduces the topic of the whole passage: the safety of AI technology. Sentence ① opens with the "monster" image: will AI, like Frankenstein's monster, run out of control and threaten humanity? Sentences ②-④ respond: ②③ answer no; ④ explains why: the construction and deployment of AI is a continuous process in which humans take part throughout and bear responsibility at every step.

  II ①This is what makes Google’s declarations of ethical principles for its use of AI so significant, because it seems to be the result of a *revolt among the company’s programmers. ②The senior management at Google saw the supply of AI to the Pentagon as a goldmine, if only it could be kept from public knowledge. ③'Avoid at all costs any mention or implication of AI,' wrote Google Cloud's chief scientist for AI in a *memo. ④'I don't know what would happen if the media starts picking up a theme that Google is secretly building AI weapons or AI technologies to enable weapons for the Defense industry.'

  Translation: This is what makes Google's declaration of ethical principles for its use of AI so significant, because it appears to be the result of a revolt among the company's programmers. Google's senior management saw supplying AI to the Pentagon as a goldmine, provided it could be kept from the public. "Avoid at all costs any mention or implication of AI," Google Cloud's chief AI scientist wrote in a memo. "I don't know what would happen if the media starts picking up the theme that Google is secretly building AI weapons, or AI technologies to enable weapons, for the defence industry."

  Commentary: Paragraph II presents the event: Google's use of AI in the military sphere triggered employee protests, which in turn led to its ethical principles for AI. Sentence ① states the event and its cause directly; ② explains why Google's senior management put AI to military use; ③④ reveal Google's worries and hint at the possible consequences, echoing the ethical principles raised in ①.

  III ①That, of course, is exactly what the company had been doing. ②Google had been subcontracting for the Pentagon on Project Maven, which was meant to bring the benefits of AI to war-fighting. ③Then the media found out and more than 3,000 of its own employees protested. ④Only two things frighten the tech giants: one is the stock market; the other is an organised workforce. ⑤The employees’ agitation led to Google announcing six principles of ethical AI, among them that it will not make weapons systems, or technologies whose purpose, or use in *surveillance, violates international principles of human rights. ⑥This still leaves a huge intentional exception: profiting from “*non-lethal” defence technology.

  Translation: That, of course, is exactly what Google had been doing. Google had been subcontracting for the Pentagon on Project Maven, which was intended to bring the benefits of AI to the battlefield. Then the media found out, and more than 3,000 of its own employees protested. Only two things truly frighten the tech giants: one is the stock market; the other is an organised workforce. The employees' protest led Google to announce six principles of ethical AI, among them that it will not build weapons systems, or technologies whose purpose, or whose use in surveillance, violates international human-rights principles. This still leaves a huge, deliberate exception: profiting from "non-lethal" defence technology.

  Commentary: Paragraph III assesses the event: Google's ethical principles for AI had an ulterior motive, and they omit a major issue. Sentences ①② explain the event in detail; ③ notes the consequences once the story broke, echoing paragraph II; ④-⑥ assess Google's response: announcing ethical principles for AI. ④⑤ show that the principles were announced for reasons of self-interest; ⑥ points out the major loophole they leave open.

  IV ①Obviously we cannot expect all companies, still less all programmers, to show this kind of ethical *fine-tuning. ②Other companies will bid for Pentagon business in the US: Google had to beat IBM, Amazon and Microsoft to gain the Maven contract. ③But in all these cases, the companies involved – which means the people who work for them – will be actively involved in maintaining, *tweaking and improving the work. ④This opens an opportunity for consistent ethical pressure and for the attribution of responsibility to human beings and not to *inanimate objects. ⑤Questions about the ethics of artificial intelligence are questions about the ethics of the people who make it and the purposes they put it to. ⑥It is not the monster, but the good Dr Frankenstein we need to worry about most.

  Translation: Obviously we cannot expect all companies, still less all programmers, to show this kind of ethical fine-tuning. Other companies will bid for Pentagon business in the US: Google had to beat IBM, Amazon and Microsoft to win the Maven contract. But in all these cases the companies involved, which means the people who work for them, will be actively engaged in maintaining, tweaking and improving the work. This opens an opportunity for sustained ethical pressure, and for attributing responsibility to human beings rather than to inanimate objects. Questions about the ethics of artificial intelligence are questions about the ethics of the people who make it and the purposes they put it to. It is not the monster but the good Dr Frankenstein that we need to worry about most.

  Commentary: Paragraph IV states the author's view: the core of AI ethics lies with people. Sentences ①② first note that the big tech companies are competing to profit from AI; ③④ then turn to argue that the people involved can keep improving the work and exert sustained ethical pressure; ⑤⑥ conclude that the ethics of AI are really the ethics of the people who make and use it, and that what we should worry about is not the "monster" but "Dr Frankenstein", its creator.

