Moralities of Intelligent Machines

We are entering a new era of intelligent machines, but we have yet to study human preferences for the moral decisions these machines are expected to make. If a robot car is driving according to traffic laws but a human driver makes a mistake, do we expect the robot to break the law and drive into a ditch in order to save its passengers? What if a care robot administers medicine to a patient who then dies: whom do people hold responsible for the death — the robot, the person who gave the medication to the robot, or perhaps the hospital administration that purchased the robots?

Intelligent machines might sound like science fiction, but as Helsinki Challenge semifinalist team leader Michael Laakasuo points out, we are already living in a sci-fi world.

“Most of us have an elementary intelligent machine in our pockets. Smartphones make decisions for us all the time, choosing routes and giving directions, for instance. Robot cars can drive coast to coast in the US with 99 per cent accuracy. If that’s not living in the future for someone from 50 years ago, then what is? Robots might not walk among us, but they’re in the framework of society already.”

Should we punish a robot?

Laakasuo and his team also think that humanity needs to be prepared for the moral questions that arise when something goes wrong. The team plans to use qualitative methods, such as essay-writing prompted by videos, to investigate what people think about fatal mistakes made by robots. They will also study how the phenomenon known as the uncanny valley — the effect in which people feel unsettled by robots that look and move almost, but not quite, like humans — influences perceptions of robot morality. Another part of the research will focus on how human intuitions about punishment apply to robots. It is possible that humans will treat robots the same way they treat babies or animals, or robots may provoke entirely new moral reactions. In the long run, the team hopes to establish a whole new field of inquiry concentrating on the moral psychology of robotics, which could help entire industries working with new decision-making robots.

Want to help this team? Become a Helsinki Challenge partner here.

TEAM: Team leader Michael Laakasuo (researcher, UH), Mikko Salmela (adjunct professor, UH, Dep. of Politics and Economy, Practical Philosophy), Jussi Palomäki (post-doc, Newcastle University School of Computing), Marianna Drosinou (PhD student, UH, IBS), Nils Köbis (PhD student, VU University of Amsterdam, Department of Social and Organisational Psychology), Markus Jokela (assoc. professor, UH, IBS).