September 07: Discussion Topic

Notice: This month’s event is on Wednesday instead of the usual Thursday.

This month’s topic has two parts:

Part A

Autonomous cars and the ethics of sacrificing occupants for the greater good of society

This topic has been in the news lately for two reasons: 1) a recent fatal accident involving a Tesla on “autopilot”, and 2) this recent study:

“The social dilemma of autonomous vehicles”
by Jean-François Bonnefon, Azim Shariff, Iyad Rahwan
Science, 24 Jun 2016: Vol. 352, Issue 6293, pp. 1573-1576

Read (or download) the PDF here:
http://science.sciencemag.org/content/352/6293/1573.full.pdf+html

Here’s a discussion of that journal paper in the mainstream media:

In A Deadly Crash, Who Should A Driverless Car Kill — Or Save?
http://www.huffingtonpost.com/entry/driverless-cars-moral-dilemma_us_576c1bd0e4b0aa4dc3d4b557

People can’t make up their moral minds about driverless cars.

In a series of surveys published Thursday in the journal Science, researchers asked people what they believe a driverless car ought to do in the following scenario: A group of pedestrians are crossing the street, and the only way the car can avoid hitting them is by swerving off the road, which would kill the passengers inside.

The participants generally agreed that the cars should be programmed to sacrifice their passengers if doing so would save many other people.

This, broadly speaking, is a utilitarian kind of answer — one aimed at preserving the greatest possible number of lives.

But there’s one problem: The people in the survey also said they wouldn’t want to ride in these cars themselves.

What do you think?

Some possible talking points:

1. It’s easy to pose the question: Which is better, to kill ten pedestrians and save one passenger, or to sacrifice the passenger and save the pedestrians?

2. A much harder question is: How does the computer (the algorithm) decide how many people will be killed as a result of its available choices?

3. If question #2 is not answerable in real time, under real circumstances, with sufficiently accurate probabilities, then perhaps the moral question posed in question #1 is moot. Is the default then to give priority to the passengers? (A toy sketch of this trade-off appears after these talking points.)

4. In question #1, there seems to be the assumption that all lives are equal for the purposes of deciding who dies and who lives. The authors of the journal paper question that assumption.

5. At the end of that journal paper, the authors raise several interesting questions including: Should governments regulate the moral algorithms that manufacturers offer to consumers?

6. Why have moral algorithms at all? Why not just default to maximizing passenger safety? Isn’t that presumably what human drivers do now?

7. What are the legal ramifications of a moral algorithm making the wrong choice? And who decides? Is implementing moral algorithms in driverless cars just asking for (legal) trouble?

8. Are taxi drivers (or other professional drivers) required to sacrifice their passenger to save ten pedestrians? If not, then why should driverless cars be so burdened?

9. How often have human drivers had to make these kinds of moral choices regarding who to sacrifice? Has anyone ever been prosecuted for making the wrong choice?

From that same Huffington Post article:

. . . “To maximize safety, people want to live in a world in which everybody owns driverless cars that minimize casualties, but they want their own car to protect them at all costs,” said Iyad Rahwan, a researcher at MIT and co-author of the study.

. . . But it’s the tragedy of the commons: “If everybody thinks this way then we will end up in a world in which every car will look after its own passenger’s safety and society as a whole is worse off.”

10. How does he know that “society as a whole is worse off”?
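To make talking points #1 through #3 concrete, here is a minimal, purely hypothetical sketch of what a utilitarian “moral algorithm” reduces to once the hard part (estimating the probabilities in question #2) is assumed away. Everything in it — the two options, the numbers, the scoring rules — is an illustrative assumption for discussion, not anything proposed in the journal paper or used in any real vehicle software.

```python
# Toy sketch only: a "moral algorithm" reduced to expected-casualty arithmetic.
# All names and numbers below are hypothetical assumptions for discussion.

from dataclasses import dataclass


@dataclass
class Option:
    name: str
    p_pedestrian_death: float  # estimated probability each pedestrian dies
    n_pedestrians: int         # pedestrians at risk under this option
    p_passenger_death: float   # estimated probability each passenger dies
    n_passengers: int          # passengers at risk under this option

    def expected_casualties(self) -> float:
        # Utilitarian scoring: every life weighted equally
        # (talking point #4 questions exactly this assumption).
        return (self.p_pedestrian_death * self.n_pedestrians
                + self.p_passenger_death * self.n_passengers)


# Hypothetical estimates the car would have to produce in real time;
# talking points #2 and #3 ask whether such numbers are obtainable at all.
stay = Option("stay on course", p_pedestrian_death=0.8, n_pedestrians=10,
              p_passenger_death=0.0, n_passengers=1)
swerve = Option("swerve off road", p_pedestrian_death=0.0, n_pedestrians=10,
                p_passenger_death=0.9, n_passengers=1)

options = [stay, swerve]

# Utilitarian rule: minimize total expected casualties.
utilitarian_choice = min(options, key=Option.expected_casualties)

# Passenger-priority rule (talking point #6): minimize expected harm
# to the passengers only, ignoring pedestrians.
passenger_first_choice = min(
    options, key=lambda o: o.p_passenger_death * o.n_passengers)

print("Utilitarian rule picks:       ", utilitarian_choice.name)
print("Passenger-priority rule picks:", passenger_first_choice.name)
```

With these made-up numbers the two rules disagree (the utilitarian rule swerves, the passenger-priority rule stays), which is the dilemma in talking point #1; and the whole comparison hinges on probability estimates the car may not be able to produce in real time, which is the objection in talking points #2 and #3.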

Part B (optional)

Killer robots: What do people think about autonomous weapons?

Some possible talking points: