Six hundred and ninety-four. That is the number of lives that could have been saved in France in 2016 if 30% of the vehicles on the country’s roads had been connected. At any rate, that is the view of the think tank Institut Montaigne, which last year conducted a very thorough study on the subject and concluded that such a percentage could prevent 20% of accidents, which killed 3,469 people in France last year. By extrapolation, if 100% of vehicles were connected, we can suppose that 60% to 70% of accidents would be avoidable. And we are only talking here about connected vehicles, in other words those able to incorporate sensors for collision avoidance and speed control, and to stop the vehicle from starting if the driver has consumed alcohol or drugs. If it were a question of self-driving vehicles, most scientists agree that 90% of accidents could be avoided. The logic is that these vehicles eliminate human error, and their own error rate is vastly lower. Clearly still not down to zero, though, as several incidents and accidents involving Teslas, at least two of which were fatal, reminded us in 2016.
How to limit the consequences?
So the fact remains that 10% of accidents are considered unavoidable. Unavoidable? To take the example from the video put online by the excellent TED-Ed, if a car is driving behind a lorry and the lorry sheds its load, the car has only three choices: continue straight while braking and collide with the obstacle; move into the right-hand lane and crash into a car; or move into the left-hand lane and collide with a motorbike. Where a human driver reacts almost by reflex, the self-driving car will apply a rule: it cannot avoid the accident but will, as an overriding principle, try to limit the consequences. And this is where we really get to the heart of the matter. Because, applied on a case-by-case basis, what does “limit the consequences” mean? Let’s go back to the example of our car that sees a load fall off in front of it. Suppose there is one person inside the car, whereas the motorbike on the left is carrying two people and the car on the right a family of four. Will the connected vehicle sacrifice its passenger by crashing into the load? Will it instead do everything possible to protect that passenger’s life and collide with the motorbike, sacrificing two people? Or will it create more risk for its passenger by swerving into the car on the right?
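To make the dilemma concrete, the "limit the consequences" rule can be sketched as a simple cost minimisation over the three manoeuvres described above. This is purely an illustration: the function name, the options, and the casualty counts are hypothetical, not any manufacturer's actual decision logic.

```python
# A minimal sketch of a "limit the consequences" rule, assuming each
# available manoeuvre is scored by its expected number of casualties.
# All names and numbers are hypothetical illustrations.

def least_harm(options):
    """Pick the manoeuvre whose expected casualty count is lowest."""
    return min(options, key=lambda o: o["expected_casualties"])

# The three choices from the lorry example: one passenger in the car,
# two people on the motorbike, a family of four in the other car.
options = [
    {"maneuver": "brake straight into load", "expected_casualties": 1},
    {"maneuver": "swerve left into motorbike", "expected_casualties": 2},
    {"maneuver": "swerve right into family car", "expected_casualties": 4},
]

choice = least_harm(options)
print(choice["maneuver"])  # prints "brake straight into load"
```

Note that this purely numerical rule sacrifices the car's own passenger, which is exactly the kind of outcome a buyer, a manufacturer, or a legislator might refuse to accept; the arithmetic is trivial, the choice of what to count is not.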
The ethical dilemma of self-driving cars by TED-ED:
No more manslaughter?
In no way does the technology make these decisions, and that is fortunate. It will apply them, but the decisions will have to be taken up front, not by the manufacturers but by political decision-makers. Moreover, it is exactly this that is currently holding back the expansion of self-driving vehicles in many countries around the world. As we have seen, the technology is ready, but the legislative changes remain to be defined. They will be all the more difficult to make because the questions raised above are only a tiny fraction of those being asked. Besides, speaking of political decision-makers, what decisions will they take to determine what a life is worth? Suppose the President’s car finds itself in a situation where it has to sacrifice either the President and their driver or an average family. Must the President’s life be preserved at any cost? This may appear theoretical, but it also echoes a real-life situation: last November, a driver died during the passage of François Hollande’s convoy. The inquest concluded that it was manslaughter, a concept that, in such a case, could quite simply no longer exist.