Selected Foreign Periodical Readings (IV)
Passage Two
Questions 51 to 55 are based on the following passage.
The AlphaGo program’s victory is an example of how smart computers have become.
But can artificial intelligence (AI) machines act ethically, meaning can they be honest and fair?
One example of AI is driverless cars. They are already on California roads, so it is not too soon to ask whether we can program a machine to act ethically. As driverless cars improve, they will save lives. They will make fewer mistakes than human drivers do. Sometimes, however, they will face a choice between lives. Should the cars be programmed to avoid hitting a child running across the road, even if that will put their passengers at risk? What about making a sudden turn to avoid a dog? What if the only risk is damage to the car itself, not to the passengers?
Perhaps there will be lessons to learn from driverless cars, but they are not super-intelligent beings. Teaching ethics to a machine even more intelligent than we are will be the bigger challenge.
About the same time as AlphaGo’s triumph, Microsoft’s ‘chatbot’ took a bad turn. The software, named Taylor, was designed to answer messages from people aged 18-24. Taylor was supposed to be able to learn from the messages she received. She was designed to slowly improve her ability to handle conversations, but some people were teaching Taylor racist ideas. When she started saying nice things about Hitler, Microsoft turned her off and deleted her ugliest messages.
AlphaGo’s victory and Taylor’s defeat happened at about the same time. This should be a warning to us. It is one thing to use AI within a game with clear rules and clear goals. It is something very different to use AI in the real world. The unpredictability of the real world may bring to the surface a troubling software problem.
Eric Schmidt is one of the bosses of Google, which owns AlphaGo. He thinks AI will be positive for humans. He said people will be the winner, whatever the outcome. Advances in AI will make human beings smarter, more able and “just better human beings.”
Intensive Reading Notes
The AlphaGo program’s victory is an example of how smart computers have become.
But can artificial intelligence (AI) machines act ethically, meaning can they be honest and fair?
Could “program” be omitted here?
One example of AI is driverless cars. They are already on California roads, so it is not too soon to ask whether we can program a machine to act ethically. As driverless cars improve, they will save lives. They will make fewer mistakes than human drivers do. Sometimes, however, they will face a choice between lives. Should the cars be programmed to avoid hitting a child running across the road, even if that will put their passengers at risk? What about making a sudden turn to avoid a dog? What if the only risk is damage to the car itself, not to the passengers?
[Reference translation] The victory of the AlphaGo Go-playing program is one example of how smart computers have become. (51)
But can artificial intelligence (AI) machines act ethically? That is, can they be honest and fair?
One example of AI is the driverless car. (52-) They are already running on California roads, so it is not too early to ask whether we can program a machine to act ethically. As driverless cars improve, they will save lives. They will make fewer mistakes than human drivers do. (52-) Sometimes, however, they will face a choice between lives. Should these cars be programmed to avoid hitting a child running across the road, even if doing so puts the passengers at risk? What about making a sudden turn to avoid a dog? What if the only risk is damage to the car itself, with no harm to the passengers?
ethically adv. in a way that accords with moral principles; morally
sb act ethically = someone behaves in accordance with moral standards; act can also be replaced with behave, e.g.: The committee judged that he had not behaved ethically.
[Related word] ethical adj. [ˈeθɪkl] connected with beliefs and principles about what is right and wrong
ethical issues/standards/questions
the ethical problems of human embryo research
ethics [plural] n. moral principles that control or influence a person's behaviour
professional/business/medical ethics
driverless cars
“Driverless cars” can also be expressed as autonomous cars, driverless vehicles, or self-driving cars. An article on this topic is likely to mention its key term many times, so it is worth collecting these alternative ways of expressing it.
As a noun, program means “a set of coded instructions,” e.g.: Load the program into the computer. As a verb, it means “to give a machine instructions; to write a program,” e.g.: He programmed his computer to compare all the possible combinations.
[Grammar note] In “a child running across the road,” the present participle phrase acts as a post-modifier; it can be expanded into a relative clause: a child (who is) running across the road.
make a sudden turn = to turn sharply
Similarly, braking hard can be expressed as a sudden brake.
Perhaps there will be lessons to learn from driverless cars, but they are not super-intelligent beings. Teaching ethics to a machine even more intelligent than we are will be the bigger challenge.
About the same time as AlphaGo’s triumph, Microsoft’s ‘chatbot’ took a bad turn. The software, named Taylor, was designed to answer messages from people aged 18-24. Taylor was supposed to be able to learn from the messages she received. She was designed to slowly improve her ability to handle conversations, but some people were teaching Taylor racist ideas. When she started saying nice things about Hitler, Microsoft turned her off and deleted her ugliest messages.
