Opinion: If you think software code is ethically neutral, you’re lying to yourself

Deutsche Welle

Google evangelist Vinton Cerf thinks there’s no room for philosophical thinking in programming self-driving cars. Just tell them not to hit things and we’ll be fine. DW’s Zulfikar Abbany takes issue.
It was an accident waiting to happen. Despite all efforts by Joachim Müller-Jung, the conversation slowly but surely swerved towards self-driving cars. Up until then, I had been rather bored.

It was Wednesday, the third “Press Talk” at the 66th Lindau Nobel Laureate Meeting, and the topic was artificial intelligence (A.I.).

Müller-Jung, who’s the head of science and nature at the German daily newspaper “Frankfurter Allgemeine Zeitung,” had repeatedly said during his long and winding introduction, “We’re not going to talk about self-driving cars or ‘rogue A.I.’ here!” And the audience of journalists and young scientists laughed.

He had wanted to talk about quantum mechanics and invited quantum expert Rainer Blatt to do just that.

But Vinton (Vint) Cerf, a Google vice president and Chief Internet Evangelist, was also on the panel. So as soon as the floor was opened to questions, self-driving cars were in pole position.

Google, as you may know, is at the forefront of research into self-driving cars and the artificial intelligence they will need to get us from A to B, if we so trust them.

Quips about car crashes

Cerf had fun regaling the audience with stories of Google’s self-driving experiments and how one of their cars had hit a bus but that it was “only at about 3 kilometers per hour (2 mph),” and how in other cases Google’s autonomous cars had been rear-ended by other cars, driven by humans. The stupid, slow-reacting things.

But what happens, asked young scientist Mehul Malik, when a car is faced with the choice of either hitting a child in the road or swerving to avoid the child in a way that would endanger the passengers in the car?

The evangelist’s response was offhand, to put it mildly. As befits an evangelist, Cerf has no sympathy for people whose worldviews contradict his own.

He said there was no point in investing self-driving cars with the philosophical concerns that we face as humans. All you have to do is instruct autonomous cars, through the software and code you write, “not to hit anything.”
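
To see why even that instruction is less neutral than it sounds, here is a minimal sketch in Python. The scenario, the "Option" type and the ranking are my own invention, not Google’s code; the point is only that once no collision-free choice exists, someone has to type a preference into the machine.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    hits_pedestrian: bool
    risks_passengers: bool

def choose(options: list) -> Option:
    # First preference: any option that hits nothing at all,
    # exactly as the "don't hit anything" instruction demands.
    safe = [o for o in options if not o.hits_pedestrian and not o.risks_passengers]
    if safe:
        return safe[0]
    # No such option exists. Preferring to spare the pedestrian (or the
    # passengers) is a value judgment the programmer types in; the code
    # cannot stay silent on it and still return an answer.
    for o in options:
        if not o.hits_pedestrian:
            return o
    return options[0]

options = [
    Option("brake hard, stay in lane", hits_pedestrian=True, risks_passengers=False),
    Option("swerve towards the barrier", hits_pedestrian=False, risks_passengers=True),
]
print(choose(options).name)  # -> "swerve towards the barrier"
```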

Fuzzy logic

I take issue with this idea, fundamentally. I don’t have half the brains Cerf has. He, after all, is one of the “fathers of the internet,” having co-designed the TCP/IP protocols that underpin our global network of computers. He also won the 2004 Turing Award.

But I know for a fact there’s a problem with Cerf’s logic. It’s boorishly utopian.

We humans are philosophical. We can empathise and get confused. We observe degrees of truth – fuzzy logic. Computers don’t get confused; they either do or they don’t, according to the instructions in the software that runs them. This may make it easier for self-driving cars to react faster in dangerous situations. But it doesn’t mean they will make the right decision.
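
To make that contrast concrete, here is a purely illustrative sketch. The distances and thresholds are invented for the example, not drawn from any real system; they only show the difference between a crisp yes/no rule and a fuzzy, graded one.

```python
def crisp_too_close(distance_m: float) -> bool:
    # The classic computer view: the obstacle either is "too close" or it is not.
    return distance_m < 10.0

def fuzzy_too_close(distance_m: float) -> float:
    # The fuzzy-logic view: "too close" is a matter of degree between 0 and 1.
    if distance_m <= 5.0:
        return 1.0
    if distance_m >= 20.0:
        return 0.0
    return (20.0 - distance_m) / 15.0

print(crisp_too_close(9.9), round(fuzzy_too_close(9.9), 2))    # True 0.67
print(crisp_too_close(10.1), round(fuzzy_too_close(10.1), 2))  # False 0.66
```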

To repeat, we humans are philosophical. And while we may no longer do the driving, we are still the ones who want to be driven. So it’s our responsibility to invest philosophy in the programming of self-driving cars. Without philosophy, we have no ethics or morals.

The problem goes even deeper, however. No matter what Cerf insists, it is impossible not to invest philosophy or ethics in our computer code. As soon as a programmer begins to type, they make decisions. They have to. And those decisions are based on the way they see the world. That is their philosophy – their interests and their visions of the future, whether humanitarian, aesthetic or commercial.

For decades, tech utopians have peddled the erroneous belief that code is neutral, that it can do no wrong. It’s like advocates of America’s Second Amendment, the right to keep and bear arms, saying “guns don’t kill people, people do.” And it’s equally illogical. Of course, it’s people who kill. It’s also people who code. And people should take responsibility for their code.

For heaven’s sake, take responsibility, man!

To suggest code is neutral and without philosophy or ethics is to suggest a future where we can happily say to bereaved parents, “I’m terribly sorry that my car ran over your child, but it was the car’s fault, I was in the back having sex.”

So, Mr Cerf, let’s program our self-driving cars with the highest-priority instruction “not to hit anything.” Fine. That in itself is a philosophy. In any case, the majority of people are programmed, or conditioned, to behave that way. But we still have accidents, and self-driving cars will too.

My question is: will you still be prepared to say there’s no room for philosophy, for ethics, when the insurance companies refuse to pay out and the car manufacturers refuse to take responsibility for bugs in their software? You may like to try. But someone will be making a decision not to pay out, and it will be based on the way they see the world.
