If a robot can commit a crime, should it be charged? And how do you punish a robot?
In 2017, the European Parliament received a draft report from its Committee on Legal Affairs about robots. With robotics progressing at a rapid rate, the report focuses on the question of liability. For example, who is at fault if a robot injures someone or damages property? According to the report, depending on its level of automation and autonomy, the robot may bear more responsibility than its creators.
Who’s in charge?
On the question of who's to blame, the report suggests that if a robot can only do the tasks it's been programmed to do, its creators are at fault for any damage, because the robot is acting as a tool.
But when a robot can use machine learning and artificial intelligence to adapt to its environment, the robot itself would be at fault. This helps in terms of knowing ethically who to blame, but raises the question – how do we punish a robot if it is guilty?
The report outlines a few suggestions for how we might make a robot pay for its crimes.
Firstly, it suggests a classification system for robots: once a robot reaches a certain threshold of sophistication, it must be registered with the European Union.
Secondly, the report proposes a compulsory insurance scheme, under which manufacturers pay insurance for the robots they make. Kind of like car insurance, except the manufacturer pays, not the owner.
A slightly more abstract suggestion is paying robots ‘wages’. No, this isn’t so the robot can save up for a nice holiday - the wages would be used to create a compensation fund in case the robot is liable for any damages down the road.
Do we treat robots as humans?
The draft report even toys with the idea of granting robots a status of personhood. Blurring the line between categorising humans and robots is a controversial topic, but the report insists this would be done for the benefit of humans, not robots.
For instance, what happens if robots and automation start to replace more jobs than they create? Systems like welfare and government benefits depend on employment taxes, and they could be underfunded if there aren't enough people in the workforce. If robots were classified as persons in an employment setting, business owners who use robots in automated roles would be obliged to pay taxes for them, just as they would for human employees in those positions.
The beginning of a big conversation
While none of the report's recommendations have been written into law at this stage, the report shows that government bodies are starting to take the issues of robots, artificial intelligence and ethics seriously. It has since been frequently cited in the ongoing discussion of the ethical rights of robots, with lively debate continuing into 2019.
Are you interested in using your thinking to challenge the modern legal and political environment? A Law degree from Murdoch University tackles real-world issues, and prepares you for the workforce of the future.