When original equipment manufacturers (OEMs) design their self-driving cars, theoretical problems become practical problems of engineering. While on the road, autonomous vehicles (AVs) might face driverless dilemmas and, consequently, would have to follow ethical guidelines to solve them. In the recent study "From driverless dilemmas to more practical commonsense tests for automated vehicles", Swiss Re's Luigi Di Lillo and a team of field experts put the "common-sense" driving behaviour of AVs to the test, examining a topic that will become increasingly relevant in the mobility space of the future: how do mobility players and engineers ensure that AVs demonstrate and comply with ethical driving behaviours when faced with real-world conditions that might force them, for example, to make a decision not previously tested?
ADAS-fitted vehicles, like the ones we see on the roads nowadays, help drivers perform certain manoeuvres that, for a variety of reasons, they could not perform themselves (due to distraction, misjudging traffic dynamics, etc.). Drivers are required to remain in control of their cars at all times. AVs, on the other hand, make the driver's tasks redundant and, in some cases, might (and should) be better than the average driver at foreseeing, preventing and mitigating accidents. In other words, ADAS-fitted vehicles are prescribed to perform a certain action whenever certain conditions arise. ADAS don't learn: they monitor, process, flag or change status, and act. AVs, on the other hand, monitor, process, "learn", evaluate and act. They are "intelligent" beings.
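The architectural distinction above can be made concrete with a toy sketch. All names, thresholds and the learning rule here are hypothetical illustrations, not anything taken from the paper or from a real vehicle stack:

```python
def adas_emergency_brake(time_to_collision_s: float, threshold_s: float = 1.5) -> str:
    """ADAS: a fixed, prescribed rule. Same input always yields the
    same action; the system never updates itself."""
    return "BRAKE" if time_to_collision_s < threshold_s else "NO_ACTION"


class LearnedAvPolicy:
    """AV: a policy whose behaviour depends on parameters that are
    updated from driving experience (here, a toy threshold adaptation)."""

    def __init__(self, threshold_s: float = 1.5):
        self.threshold_s = threshold_s

    def act(self, time_to_collision_s: float) -> str:
        # Evaluate the situation against the *current* learned parameters.
        return "BRAKE" if time_to_collision_s < self.threshold_s else "NO_ACTION"

    def learn(self, near_miss: bool) -> None:
        # Toy update rule: after a near miss, start braking earlier.
        if near_miss:
            self.threshold_s += 0.1
```

The ADAS function is a static mapping from conditions to actions, while the AV policy's "monitor, process, learn, evaluate, act" loop means identical inputs can produce different actions over time as its parameters adapt.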
If you expect to go to a dealer and buy your own AV, then you'll have to wait quite a few years. However, AVs are already on some roads nowadays, operating within well-defined operational domains, and are mostly used for fleet operation, ride-sharing and taxi services.
It's not so much a matter of technology but, rather and mostly, a matter of regulation and public acceptance. The question is: why are people willing to accept and trust the non-deterministic, multi-faceted behaviour of a human driver, yet have difficulty accepting and trusting the possibly more deterministic and consistent (i.e. a given rule is always fulfilled in the same manner) behaviour of an artificial intelligence (AI)-powered car? Is it the subconscious acknowledgement that AVs lack common sense? Or perhaps the fact that we humans think we know, and therefore can better predict, how other humans behave because we are of the same nature?
Common-sense driving entails more than a "simple" engineering solution, namely, more than the AV executing a set of control commands (steering, braking, etc.) that achieve the desired transport goal, i.e. delivering occupants or goods from point A to point B. Indeed, it involves the broad, complex and interdisciplinary field of human factors: passengers and road users expect AVs to behave in a safe, predictable, reasonable, uniform, comfortable and explainable manner. Yet these behavioural expectations represent a floor rather than a ceiling for AVs. AVs should be better than human drivers and set new norms that push everyone to improve.
By not just testing the AV's obedience to basic road rules under ordinary conditions! Indeed, if we did only that, then broader safe behaviours would not be guaranteed. We should go beyond redundancy to resiliency, namely defining a behavioural system that can solve problems in various driving settings. In the paper, we set out some ideas that we hope could inspire "edge case" testing of common-sense driving. These tests must cover a range of scenarios that are not typically encountered during a standard test for human drivers, but which may nonetheless arise on real roads, such as navigating around large debris or safely overtaking long lorries on a two-way road. These scenarios may vary in multiple aspects, such as prevalence and ethical relevance, and can be tested both in simulation and on real roads.
Transparency, vigilance, admission of missteps and continuous improvement are keywords here. A good ethical testing framework for AVs should guarantee the safety of the vehicles at every stage of development, deployment and operation. This means focusing not only on the product, but also on the company, and scrutinising factors such as corporate governance, design philosophy, evaluation and integration of standards, monitoring and updating. Companies should also be more forthcoming in explaining why they believe what they are doing is reasonably safe and why we should believe them. If companies can legitimately convince consumers that they are transparent, vigilant and continuously improving, then consumers will reasonably trust them. Our analysis concludes that at least one key part of earning this trust will be convincing the public that their AVs behave with common sense. Swiss Re will play a pivotal role in the future of mobility as the connecting element among all stakeholders and as the risk knowledge and transfer hub that could help balance the benefit/cost equation of the future mobility ecosystem.
Luigi Di Lillo leads the Products and Partnerships team at Swiss Re's Business Unit Reinsurance. He is responsible for spearheading solutions around vehicle safety, electrification, connected cars and vehicles with increasing degrees of automation. Luigi is a steering committee member of the World Economic Forum Safe Drive Initiative, which aims to create a generalised framework for assessing autonomous vehicles and fostering their adoption. He has been at the core of the recent Swiss Re partnership announcements with Veoneer and Toyota. Luigi recently co-authored the paper "From driverless dilemmas to more practical commonsense tests for automated vehicles" and the white paper "The significance of progress".