The ethics issue: Should we give robots the right to kill?

Author: 匡蜡     |      Date: 2019-03-02 02:11:02
Kirk Marsh/Getty

By Douglas Heaven

Hot-headed, irrational and swayed by emotion – who’d want a human in control? If we could build machines capable of making tough choices for us, surely we should. That’s the line taken by people like roboticist Ron Arkin at the Georgia Institute of Technology in Atlanta. For Arkin, autonomous weapons – or killer robots – that remain rational under fire and behave exactly as they were trained to would be more humane than human soldiers in a war situation, and would save lives. We therefore have a moral imperative to create them.

The same reasoning can be applied to many scenarios where human nature may stop us doing the right thing, from driving to making life-or-death decisions in hospitals to criminal sentencing. Computers are already moving into all these areas, and in many cases surpass humans where it counts. But how much autonomy should we give them?

The problem with fully autonomous machines from a moral point of view is that they cannot take responsibility for their actions. Human ethics is built on the assumption that actions are done by agents with the capacity to make a call between right and wrong. If we offload those actions on to machines, who do we blame when something goes wrong? Filippo Santoni de Sio,