The Ethics of the Driverless Car

Richard Wallace

When a new technology emerges it’s easy to get carried away by the possibilities, but inevitably any new technology or method of thinking will encounter unforeseen difficulties when it comes to implementation. Part of the reason Alphabet’s virtual city is a useful innovation tool is that it lets new ideas and models be developed outside the restrictions of everyday life, which is often crucial to exploration: the problems inherent in implementing, say, traffic flow for driverless cars are a barrier to exactly the kind of breakthroughs that might eventually inform how the driverless model works in reality.

But obviously we have to think about some of the issues that arise when we try to implement ideas that work perfectly in isolation and balance them against everyday behaviour. A theory is only as good as how you integrate it with current practices and infrastructures, especially as it is impossible to create a total shift overnight. While Silicon Valley may hope, optimistically, that its utopian technologies could be adopted rapidly for the benefit of society, in truth humans are by nature stubborn, inconsistent and often resistant to change, even when the benefits are immediately evident to them. There is no way a technology like driverless cars could be introduced without being able to coexist with how we currently use and understand roads and traffic systems, and during this transition phase, while our behaviour adapts, there are bound to be issues that we simply didn’t predict on paper.

The debating website Kialo is currently hosting a relevant discussion called “Who Should Driverless Cars Kill”, an alarmingly titled back-and-forth about the relative ethics of driverless cars, something that many of us might not have immediately considered when presented with the option of a more efficient and green traffic solution. The premise is that when a traffic collision happens today, it is usually the result of human judgement (or misjudgement), which allows us to effectively implement safety measures like driving tests and licences, insurance, speed limits, motoring laws and traffic management systems.

But while driverless cars are autonomous, they are not fully so—they are the result of programming, which shifts much of the burden of these safety measures into the computer lab. The potential dangers of driverless traffic accidents are not entirely contingent on human interpretation of a given situation, or on decisions made in the moment according to the judgement of the drivers involved. Because such vehicles require programming in advance for certain eventualities, and are merely guided rather than fully operated by human drivers, we are required to discuss the ethical framework that dictates how the vehicles are coded. If, Kialo posits, an unpredictable incident such as a jaywalker springing into the road is presented to a driverless car and the vehicle, hypothetically, cannot avoid a potentially lethal manoeuvre—either accepting the collision or veering off the road—should the car be programmed to act in the interests of the driver or the pedestrian?
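To make the point concrete, here is a minimal, entirely hypothetical sketch of what “programming in advance for certain eventualities” forces a manufacturer to decide. The policy names, risk numbers and the choose_manoeuvre function below are illustrative assumptions for this article, not a description of any real manufacturer’s code.

```python
# Hypothetical sketch: an "ethical constant" baked into a collision-avoidance routine.
# Policy names and risk figures are invented for illustration only.
import random
from dataclasses import dataclass
from enum import Enum


class Policy(Enum):
    PROTECT_OCCUPANT = "occupant_first"      # always minimise risk to the people in the car
    PROTECT_PEDESTRIAN = "pedestrian_first"  # always minimise risk to people outside it
    MINIMISE_TOTAL_HARM = "utilitarian"      # minimise combined expected harm
    ARBITRARY = "arbitrary"                  # the "choose arbitrarily" position from the Kialo debate


@dataclass
class Manoeuvre:
    name: str
    occupant_risk: float    # estimated probability of serious harm to occupants (0 to 1)
    pedestrian_risk: float  # estimated probability of serious harm to the pedestrian (0 to 1)


def choose_manoeuvre(options: list[Manoeuvre], policy: Policy) -> Manoeuvre:
    """Pick a manoeuvre according to a pre-programmed ethical policy.

    The point is not the arithmetic: it is that some line like the ones
    below must exist, and somebody must decide what it says, long before
    a jaywalker ever steps into the road.
    """
    if policy is Policy.PROTECT_OCCUPANT:
        return min(options, key=lambda m: m.occupant_risk)
    if policy is Policy.PROTECT_PEDESTRIAN:
        return min(options, key=lambda m: m.pedestrian_risk)
    if policy is Policy.MINIMISE_TOTAL_HARM:
        return min(options, key=lambda m: m.occupant_risk + m.pedestrian_risk)
    return random.choice(options)


# The jaywalker dilemma, with made-up numbers:
options = [
    Manoeuvre("brake and accept the collision", occupant_risk=0.05, pedestrian_risk=0.7),
    Manoeuvre("veer off the road", occupant_risk=0.6, pedestrian_risk=0.0),
]
print(choose_manoeuvre(options, Policy.PROTECT_OCCUPANT).name)    # brake and accept the collision
print(choose_manoeuvre(options, Policy.PROTECT_PEDESTRIAN).name)  # veer off the road
```

Whichever branch a manufacturer chooses, the contested part is not the maths but the policy constant set at the top; that, in miniature, is the ethical constitution discussed below.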

Perhaps this seems like a far-fetched scenario, given that drivers of autonomous vehicles would be able to override the car’s programmed function, and the cars may even feature technology that is more efficient at recognising sudden hazards, like jaywalkers, than humans are. But we know from the automation of the airline industry that if something does go wrong, pilots are required not only to analyse the situation themselves but also to understand complex systems at high speed, which can be especially difficult if the usually infallible safety protocols have robbed them of the necessary experience of high-pressure problem-solving. Boeing’s Delmar Fadden, talking about aircraft automation systems (which include fly-by-wire autopilot), says “I’m going to cover the 98 percent of situations I can predict, and the pilots will have to cover the 2 percent I can’t predict,” meaning that pilots often have to rely on judgements that they only get to exercise 2 per cent of the time. As Vanity Fair puts it in the linked account of the doomed 2009 Air France flight from Rio de Janeiro to Paris, “automation complexity comes with side effects that are often unintended,” as forewarned in Wiener’s Laws—which include “Exotic devices create exotic problems” and “Digital devices tune out small errors while creating opportunities for large errors.” We have to assume that Wiener’s Laws also apply to automotive automation.

All of which essentially means that the onus is on driverless vehicle manufacturers to underpin the programming of their systems with an ethical constitution. If an event occurs that falls outside the 98 per cent of predictable scenarios, will the vehicle be duty-bound to protect its owner, or to sacrifice the safety of the driver and protect the pedestrian? How do we calculate the resulting liability and judge who is at fault, as we currently do for insurance and legal purposes?

This isn’t to suggest that automation doesn’t provide an opportunity for far greater safety overall—quite the opposite. In fact, airline automation has reduced aircraft crashes so dramatically in recent years that some crash investigators have chosen to retire early for lack of work. But the issue at hand isn’t about the benefits of automation—it’s about how to implement those benefits in a consistent, safe, predictable way.

The Kialo debate posits that “the adoption of self-driving cars will be limited if people are afraid that their car may deliberately choose to harm them,” which may set alarm bells ringing for those who believe that market incentives are less important to protect than individual human lives, even if they simultaneously hold the utilitarian position that “it is better for self-driving cars to be adopted as soon as possible because the technology saves so many lives. Any lives lost due to unavoidable accidents are less important than this wider goal.” Whether or not you agree with this argument over the idea that “it is immoral for self-driving cars to deliberately favour one human life over another…the only moral option is to choose arbitrarily,” the fact remains that a tranche of unexamined ethical deliberations lurks below the surface of the driverless cars debate. On a purely functional level, driverless technology is going to go wrong at some point. Who should shoulder the biggest share of that burden?

Successful innovation is a process of mediation towards these kinds of solutions. Problems are often larger and more complex than we necessarily realise, especially since those developing new technologies cannot always be expected to be the ones who settle these questions. Arguably, it would get in the way if broad-thinking developers were forced to reckon with the practical minutiae of their creations at the same time as developing the top-line technology. But at a time when tech is galloping forward at an unprecedented rate, start-ups are developing private alternatives to public services and tech companies are powerful enough to alter human moods and influence political stability, it pays to respect the paradox of innovation. Making something possible is only half the battle; making it work is just as important. Development needs regulation, regulation demands safety, and safety is often a product of development; but the elements in this cycle are often at odds with each other.