The Worth of a Dilemma
I listened to a podcast today on “Radiolab” (https://radiolab.org/podcast/driverless-dilemma-0923) that dealt with research on the so-called “trolley problem.” Briefly, it is an ethical dilemma in which someone sees a trolley in the distance bearing down on a group of five people. If you throw a switch in time, you can save the five, but one person standing on the track to which the trolley will divert will die if you do. Do you hit the switch? The podcast considered brain research on which parts of the brain activate when people work through this problem, and which parts activate when the problem is varied so that one must physically push someone onto the tracks to stop the trolley. The show then applied this to questions about AI in driverless vehicles.
The whole thing just didn’t sit quite right with me. I got to thinking about it and realized that what bothered me was the very idea of creating an ethical dilemma. At the heart of a dilemma is a forced choice between two options, neither of them good. Such dilemmas are typically used to put our moral proclivities plainly on the table and to expose our deepest convictions. But what if this is a mistake? What if, merely by posing a dilemma, we are training the brain to build neural pathways that prioritize an either-or choice in decisions? Why does this matter? Perhaps cultivating such pathways keeps us, in the moment of a real-world decision, from entertaining the possibility of a third, fourth, or fifth option. Perhaps the repeated experiences of our lives, listening to teachers or speakers who seek to challenge us, or completing personality surveys that force us to choose one of two responses to a situation, train our brains away from the kind of thinking that would actually enable us to respond well in situations requiring difficult, sometimes rapid, and nuanced evaluation, judgment, and wisdom. Maybe we do ourselves no favors by thinking that we are preparing ourselves for tough decisions by exercising our brains on such quandaries.
There is a technical extension to this problem as well, suggested by a variation of the trolley problem raised for driverless vehicles. If a group of five people appears in the road ahead of a driverless car, should we program the car to drive into a barrier and kill the passenger, or should the passenger be saved and the five killed? I am not an AI engineer, but I do understand that, just like our brains, AI has to be taught. And if we program ethical dilemmas into our artificial intelligence, or even program that particular either-or choice, perhaps we are keeping the technology from solving the problem more effectively than the either-or framing allows, and maybe from solving it so that no one dies.
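To make that worry concrete, here is a toy sketch in Python, purely illustrative, with every action name and harm estimate hypothetical rather than drawn from any real autonomous-vehicle system. It contrasts a decision routine handed only the two dilemma outcomes with the same routine allowed to search a wider action space, where options like hard braking or swerving to an empty shoulder might avoid harm entirely.

    # Toy illustration only: all actions and harm estimates below are
    # hypothetical, invented for this post.

    # Each action pairs a label with the estimated number of lives lost
    # if the vehicle takes it in this contrived scenario.
    DILEMMA_ONLY = [
        ("swerve_into_barrier", 1),  # the passenger dies
        ("continue_straight", 5),    # the five pedestrians die
    ]

    # The same kind of choice, but with a richer set of options on the table.
    WIDER_SEARCH = DILEMMA_ONLY + [
        ("brake_hard", 0),           # maybe the car can simply stop
        ("swerve_to_shoulder", 0),   # maybe a lane is empty
    ]

    def choose(actions):
        # Pick the action with the lowest estimated harm.
        return min(actions, key=lambda action: action[1])

    print(choose(DILEMMA_ONLY))  # ('swerve_into_barrier', 1): someone dies
    print(choose(WIDER_SEARCH))  # ('brake_hard', 0): no one has to

The decision rule is identical in both cases; only the option set changes. That is the point: if we hand the system nothing but the dilemma, the dilemma is the only thing it can solve.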
So are ethical dilemmas helpful? Or are we training
ourselves and our technology to be as dangerously polar as our politics have
turned out to be?