We should spend more time thinking about how we live with self-driving cars than about who they will kill
Analysis: Questions of who among pedestrians is most dispensable aren’t ours to answer – and there are plenty of other ethical questions we haven’t yet considered, says Andrew Griffin
Yes, self-driving cars might be safer than traditional vehicles. But all cars are likely to end up causing death, and now we know which section of the population people believe is most dispensable in the event that autonomous vehicles should kill.
New research from the Massachusetts Institute of Technology examines the ethics of autonomous vehicles, and reports the results of a survey of more than 2 million people that explored how key ethical questions should be answered. A vast online survey asking respondents what sort of people they think their robot cars should kill might sound like dystopia – but what’s just as worrying is that none of it might matter at all.
The new paper raises all sorts of anxious questions about the future of driverless vehicles. Chief among them is what should happen when a vehicle cannot avoid causing a fatality and must choose who the victim or victims will be. Researchers found a wide variety of views on the subject, but in general people wanted to save humans over animals and more people over fewer, and thought that older people should be killed to save younger ones.
But the really troubling question is whether the important people are listening. Time and time again, the technology industry has shown itself largely uninterested in the ethics of what it is doing, at least until it is too late. As potentially deadly driverless cars head out onto the streets, that becomes an ever more urgent concern.
Despite the recent study, the ethical questions about who self-driving cars will target if killing a human being is unavoidable may, therefore, end up being moot. The histories of both technology and capitalism show us that ethics are almost certainly not going to matter: self-driving cars will be about empowering their owners, and their central concern will be that the people buying them don’t die in any crash. It’s nice to think that people will buy virtuous cars, but they won’t; they’ll buy protective cars, built to ensure they themselves are safe.
It is not only in the field of ethics and injuries that the current approach to self-driving cars is lacking. The same focus on the good of the individual owner above all else is present in just about every discussion about autonomous vehicles. Climate change and other pressing concerns leave us standing at a moment where we can make profound changes to the way we live – and where we must make those changes if we want the human race to continue. But the companies with the power to push changes through have shown an unwillingness to do so.
It’s possible to make a good argument that everything technology companies decide about automated vehicles is going to be flawed, because they simply are not set up to do it well. What they excel at, above all else, is creating new ways to do things more quickly, often sacrificing other concerns along the way – and cars are a particularly stark example.
For instance, city planners and other experts have repeatedly warned that much more thought must go into the design of cities if they are to accommodate self-driving cars in a way that doesn’t disrupt the lives of the people who live and work in them. The UK government, like others, has invested huge amounts of money and reputation in self-driving car technologies, but not in the cities that will host them.
The ways that autonomous vehicles could alter how we live are radical and potentially thrilling: public vehicles that make their way around towns and so cut down on car ownership, for instance, would have huge benefits, and new modes of transport could even help deal with the housing crisis. But the tech and automotive companies have been more interested in developing the technology than the towns, and we are left with a vision of private vehicles careening around the roads and occasionally opting to kill old people.
The future of cars is bound up with the future of our civic space, of the public sphere, and of the very ways that people both bond with each other and get around. Technology companies, for all their innovative breakthroughs, have found themselves entirely ill-prepared to deal with such questions: they have, indeed, radically altered our idea of communities, but often for the worse. Facebook has approached this question most radically, and the changes it has made to communities have led to people fighting among themselves online while their discussions are monetised.
None of this, of course, is to suggest that academics shouldn’t be doing research into the ethics of driverless cars; it is incredibly important, if only in offering a framework against which corporations will be judged. Rules can be valuable even if they are honoured more in the breach than the observance, and it is useful to know what a utopian version of self-driving car ethics might look like, just so we know how far away we are from that ideal world.
But the most depressing thing is that studies like these take so much as a given. Of course there will be cars, we are forced to think, and of course there will be killing. Isn’t it sad that so many of our conversations about self-driving vehicles are about death? We’d do well to spend a little more time thinking about how we live with them instead.