How Safe Is Safe for an AV? The Answer (Expectation and Communication)

Ken Ford is a well-recognized expert in the field of artificial intelligence. In his talk “On Computational Wings: The Prospects & Putative Perils of AI,” Ford traces the progress of flying machines and compares them to their natural cousins, birds. Artificial flying machines (planes, helicopters) do not flap their wings, and this makes perfect sense. He builds this into an analogy for artificial intelligence and the danger of expecting human-like behavior from it. A quote often attributed to Einstein also seems to apply: “Everybody is a genius. But if you judge a fish by its ability to climb a tree, it will live its whole life believing that it is stupid.” That is, it may be best to use AI where it is appropriate rather than trying to mimic human behavior.

What does this have to do with expectations of safety for Autonomous Vehicles (AVs)?

Today, the open question in the AV community is “What level of validation is sufficient to be acceptable to the public?” Some would put it more bluntly: “Humans kill over 40,000 people yearly, but one AV accident seems to be a show-stopper.” This group advocates a more aggressive deployment approach on the grounds that the result MUST be better than the human alternative. The counter-argument is: “What accidents will AVs get into that humans could have easily handled?”

How can one reasonably address these issues? There seem to be two important ideas at play: Expectation and Communication. Let us consider each.

Expectation:  

Today, AVs (Level 2 and above) are placed into the public road system with a footprint that is exactly that of a human-driven car. With this use model, AVs inherit all the behavioral expectations we place on a human driving the car, which violates one of Dr. Ford's key insights. Indeed, as we discussed in “Is progress in AV technology gated by research in animal communication?”, these mismatched expectations have caused real problems: the largest source of AV accidents is actually humans hitting AVs. The misalignment in expectations has a real cost.

If we follow this line of thinking, AVs must establish clear expectations about their behavior on the roads, distinct from those for humans. Some in the engineering community would call this expectation setting “defining the Operational Design Domain (ODD),” arguing that with an ODD in hand one can build a validation and verification framework. Today, clear ODDs do not exist for any SAE level of automation; even ADAS, the lowest level of AV automation, has no clear ODD definitions. Perhaps this is why consumers say they will not pay a lot for safety (survey): no one knows exactly what they are getting. Standardization efforts such as the UN ALKS regulation are attempting to remedy this situation, and it is all good work.

However, a long ODD manual is unlikely to suffice here. Rather, one needs clear, short, easily understood concepts that the broader public can absorb quickly and efficiently. The world of IT took this approach by reusing familiar concepts such as “file” and “window.” What is the equivalent for AVs? This is an open question right now, and the cause of much of the angst. Examples such as convoying, as discussed in “Will Truck Convoying Be The First Viable Commercial Application For AV Technology?”, are an interesting start.

Communication:

“Observability, predictability, and directability are the necessary minimum properties of teamwork, whether human or machine,” said Dr. Ford. In performing the driving task, humans constantly communicate direction, intention, and risk to one another. If one imagines the frightened face of a young teenage driver, one can understand the communication of risk. This communication, combined with contextual understanding of the situation, allows the transportation system to function efficiently.

One might reasonably ask, “How well do current AVs embody these properties?” Today, AVs do not communicate any of these attributes in a visible way to third parties, and the lack of such communication mechanisms is a significant drawback. As the traffic police in Singapore pointed out in “Singapore And Autonomous Vehicles — Interesting Lessons In Governance, Planning, And Safety,” we test humans on their driver's license exam for their ability to manage risk; how do we do that for AVs? Good question!

This all brings us back to the main question: When will AVs be accepted from a safety point of view?

The answer: when there are clear expectations around AV behavior, expressed as high-level ODDs decoupled from human norms, AND when the key elements of cooperative behavior (observability, predictability, and directability) are respected.
