The slippery slope of accepting casualties caused by self-driving cars

Last week’s first-ever fatal accident involving a self-driving car was a tragedy for the victim. It was also inevitable. No sane person would expect the technology to be perfect already. In fact, probably few would expect it ever to become so perfect that the number of (fatal) accidents drops to zero.

It’s also clear that more severe accidents involving autonomous cars will follow. This reality might sound harsh when put into words, but everybody participating in traffic today (whether as driver, passenger, cyclist or pedestrian) silently acknowledges the existing risk in the same way: we know that an accident could happen, but we consider the upside of mobility to be much bigger than the risk of a crash. And rightly so.

There is a peculiarity with accidents involving autonomous cars, though: the question of responsibility beyond legal liability. As outlined in this piece, from a legal point of view the emerging scenarios can probably be resolved. But another issue remains: it is a basic rule of modern human civilization (outside of war zones) that when a person is harmed, this person or his/her relatives and friends crave to see the face of someone who caused or was in some way involved in the harm, regardless of whether that person is ultimately deemed legally responsible.

But if a self-driving car kills a family member or a good friend, this urge will remain unsatisfied. We might be able to sue the company operating the car or the developer of the algorithm, or we might publicly blame the CEO of one of the companies involved, but we intuitively know that the entity that caused our suffering remains faceless.

When faceless machines accidentally killing humans in traffic becomes the norm – even if the total number of casualties were much lower than the number of traffic deaths involving human drivers – this could challenge our whole perspective on death, on justice, and on our interactions with machines, with unforeseeable consequences. In other words, it’s an extremely slippery slope.

Thus, as illogical as it seems that one death caused by a self-driving car creates a public outcry while the tens of thousands of drivers, passengers, cyclists and pedestrians who die on the roads every year are a strangely accepted status quo, there is a rational point to it: an intuitive understanding of the ethical implications of accepting a small death toll caused by self-driving cars (regardless of the liability issue). The consequences would certainly be unpredictable, and they center on a topic that the average human being is extremely uncomfortable with anyway: death.

I don’t dare to make a confident prediction, but I consider it at least one possible outcome of the recent incident that a reasonable hesitation among the public and among policy makers to draw conclusions about this tricky situation will force a complete, temporary halt of ongoing experiments with autonomous cars on public roads.


2 comments

  1. As an independent scholar in artificial intelligence (AI), I have watched the rise of self-driven autonomous vehicles (AV) with a sense of dread and reluctance. In May 2016, a man named Joshua Brown was killed in Florida when his Tesla, operating in Autopilot mode, fatally drove under a semi-trailer truck that was crossing the highway. The light-colored semi was apparently indistinguishable from the bright sky to the Tesla’s software. Mr. Brown had been an enthusiastic early adopter of the technology. But Elon Musk’s Tesla did not tell shareholders and the public about this first fatality for about six weeks, until a U.S. government announcement forced Mr. Musk’s hand and he had to acknowledge it. Since that cover-up, I have had very low respect for Musk/Tesla and Musk/OpenAI. Now, with the Uber fatality, self-driving cars face a difficult future, because moneyed interests will try mightily to force AI-driven cars down our collective throats. IMHO (in my humble opinion) we should wait for genuine, concept-based True AI before we let artificial Minds drive cars on public highways — and I have created three such incipient AI Minds: in Forth for robots, in Perl for webservers, and in JavaScript for tutorial AI. I risk professional ostracism for coming out against current AV technology, but I believe in speaking my mind. -Arthur

    • Thanks for your contribution, insightful!
      “We should wait for genuine, concept-based True AI before we let artificial Minds drive cars on public highways”
      This makes a lot of sense. But what if this type of True AI takes decades to arrive? Or do you expect it to happen much faster?
