What moral code should your self-driving car follow?

Teaching a robot to make ethical decisions is pretty complicated.

Imagine you are driving down the street when two people — one child and one adult — step onto the road. Hitting one of them is unavoidable. You have a terrible choice. What do you do?

Now imagine that the car is driverless. What happens then? Should the car decide?

Until now, no one believed that autonomous cars — robotic vehicles that operate without human control — could make moral and ethical choices, an issue that has been central to the ongoing debate about their use. But German scientists now think otherwise. They believe eventually it may be possible to introduce elements of morality and ethics into self-driving cars.

To be sure, most human drivers will never face such an agonizing dilemma. Nevertheless, “with many millions of cars on the road, these situations do occur occasionally,” said Leon Sütfeld, a researcher in the Institute of Cognitive Science at the University of Osnabrück and lead author of a new study modeling ethics for self-driving cars. The paper, published in Frontiers in Behavioral Neuroscience, was co-authored by Gordon Pipa, Peter König, and Richard Gast, all of the institute.
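One way to picture what "modeling ethics" for a car might mean is a simple value-of-life scoring rule that ranks unavoidable-collision options and picks the least harmful one. The sketch below is purely illustrative — the category names, weights, and function are assumptions for exposition, not the model from the Osnabrück study.

```python
# Hypothetical sketch (not the study's actual model): score each possible
# path by the total "harm weight" of what it would hit, then pick the
# path with the lowest score. All weights here are illustrative.

def choose_path(options):
    """Return the path name with the lowest total harm score.

    options: dict mapping a path name to a list of obstacle categories.
    """
    # Illustrative harm weights; a real system would derive such values
    # from human judgments rather than hand-pick them.
    harm = {"adult": 1.0, "child": 1.2, "animal": 0.3, "object": 0.1}
    scores = {path: sum(harm[obstacle] for obstacle in obstacles)
              for path, obstacles in options.items()}
    return min(scores, key=scores.get)

# The dilemma from the opening: both paths hit a person, so the model
# is forced to rank them — which is exactly the ethical controversy.
print(choose_path({"stay": ["adult"], "swerve": ["child"]}))
```

Even this toy version makes the core difficulty visible: any such model must assign relative weights to human lives, and whether a machine should ever do that is precisely what the debate is about.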

Published: July 26, 2017
