Morality in machines?

10 July 2017

Credit: posteriori/Shutterstock

Virtual reality experiments into human behaviour suggest that moral decisions can be expressed algorithmically, raising the possibility of machine-based morality.

In the experiment, participants drove a simulated car through a typical suburban neighbourhood on a foggy day. They were confronted with sudden, unavoidable dilemma situations involving inanimate objects, animals, and humans, and had to decide which was to be spared. The observed behaviour was then captured by statistical models, yielding rules with a measurable degree of explanatory power.

The research showed that moral decisions in the scope of unavoidable traffic collisions can be explained well, and modelled, by a single value of life assigned to each human, animal, or inanimate object.

Leon Sütfeld, first author of the study, believes that until now it has been assumed that moral decisions are strongly context-dependent and therefore cannot be modelled or described algorithmically. "But we found quite the opposite. Human behaviour in dilemma situations can be modelled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal, or inanimate object." This implies that human moral behaviour can be well described by algorithms that could be used by machines as well.
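As a rough illustration of how such a value-of-life rule could be expressed in code, the sketch below assigns a value to each category of obstacle and picks the trajectory that sacrifices the least total value. The categories, numbers, and function names are hypothetical and are not the values estimated in the study.

```python
# A minimal, hypothetical sketch of a value-of-life decision rule.
# The numeric values below are illustrative only; they are not
# the parameters reported in the study.

VALUE_OF_LIFE = {
    "human": 100.0,
    "animal": 10.0,
    "inanimate object": 1.0,
}

def choose_trajectory(options):
    """Pick the trajectory whose obstacles carry the lowest total value of life.

    `options` maps a trajectory label to the list of obstacle categories
    that would be hit if that trajectory were taken.
    """
    return min(
        options,
        key=lambda t: sum(VALUE_OF_LIFE[obstacle] for obstacle in options[t]),
    )

# Example dilemma: swerve left into a parked car, or right into a dog.
print(choose_trajectory({
    "left": ["inanimate object"],
    "right": ["animal"],
}))  # -> "left"
```

In this toy formulation, the entire moral decision collapses to comparing sums of per-object values, which is the sense in which the authors describe the behaviour as algorithmically simple.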

The study's findings have major implications for the debate around how self-driving cars and other machines should behave in unavoidable dilemma situations.

Professor Gordon Pipa, a senior author of the study, says that since it now seems to be possible that machines can be programmed to make human-like moral decisions, it is crucial that society engages in an urgent and serious debate: "We need to ask whether autonomous systems should adopt moral judgements: if yes, should they imitate moral behaviour by imitating human decisions? Should they behave along ethical theories – and if so, which ones? And critically, if things go wrong, who or what is at fault?"


As an example, within the new German ethical principles, a child running onto the road would be classified as significantly involved in creating the risk, and therefore less qualified to be saved than a nearby adult standing on the footpath as an uninvolved party. But is this a moral value held by most people, and how large is the scope for interpretation?

"Now that we know how to implement human ethical decisions into machines, we, as a society, are still left with a double dilemma," explains Professor Peter König, a senior author of the paper. "Firstly, we have to decide whether moral values should be included in guidelines for machine behaviour; and secondly, if they are, [whether] machines should act just like humans."

The study's authors believe that autonomous cars are just the beginning as robots in hospitals and other artificial intelligence systems become more commonplace. They warn that we are now at the beginning of a new era with the need for clear rules – otherwise, machines could start making decisions without us.

Credit: sciencedaily.com; Frontiers, home.frontiersin.org

