Can We Trust Autonomous Systems and Seeing the Classics at the Technik Museum Speyer

Joseph Sifakis discusses autonomous systems at HLF 2019

In Tuesday’s opening lecture at the Heidelberg Laureate Forum (HLF), Joseph Sifakis, 2007 Turing Award winner, discussed whether we can trust autonomous systems and considered the interplay between the trustworthiness of the system – the system’s ability to behave as expected despite mishaps – and the criticality of the task – the severity of the impact an error will have on the fulfillment of a task.

Sifakis defined autonomy as the combination of five complementary functions – perception, reflection, goal management, planning, and self-awareness/adaptation. The better a given system manages these functions, the higher its level of autonomy, on a scale from 0 (no automation) to 5 (full automation, no human required).
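
To make the framework a bit more concrete, here is a rough Python sketch of the idea. The capability scores and the rule mapping them onto a 0–5 level are my own invented illustration, not part of Sifakis’s formal definition:

    from dataclasses import dataclass

    # The five complementary functions Sifakis listed.
    FUNCTIONS = (
        "perception",
        "reflection",
        "goal_management",
        "planning",
        "self_awareness_adaptation",
    )

    @dataclass
    class AutonomousSystem:
        # Hypothetical 0.0-1.0 scores for how well the system handles each function.
        capabilities: dict

        def autonomy_level(self) -> int:
            """Map the weakest capability onto a coarse 0-5 autonomy level."""
            weakest = min(self.capabilities.get(f, 0.0) for f in FUNCTIONS)
            return round(weakest * 5)  # 0 = no automation, 5 = full automation

    lane_keeper = AutonomousSystem({
        "perception": 0.6,
        "reflection": 0.3,
        "goal_management": 0.3,
        "planning": 0.4,
        "self_awareness_adaptation": 0.2,
    })
    print(lane_keeper.autonomy_level())  # -> 1, roughly "driver assistance"

The point of the toy scoring rule is that a system is only as autonomous as its weakest function: a vehicle with excellent perception but no goal management is still far from level 5.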

New trends in next-generation autonomous systems

He also noted that the way the aerospace and rail industries have historically attempted to resolve the tension between criticality and trustworthiness is very different from current practice in autonomous vehicles (AVs). According to Sifakis:

  • AV manufacturers have not followed a “safety by design” concept: they adopt ‘black-box’ ML-enabled end-to-end design approaches.
  • AV manufacturers consider that statistical trustworthiness evidence is enough – ‘I’ve driven a hundred million miles without an accident. Okay, that means it’s safe.’
  • Public authorities allow ‘self-certification’ for autonomous vehicles.
  • Critical software can be customized by updates – Tesla car software may be updated on a monthly basis – whereas aircraft software or hardware components cannot be modified.

However, for society to truly trust autonomous vehicles, scientists must offer greater levels of validation. Sifakis argued that simulation offers a way to validate these systems without running them in the real world, but simulations are only as realistic as they are programmed to be. For simulation to truly provide a benefit, it must be realistic and offer semantic awareness, exploring corner-case scenarios and high-risk situations.
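
As a loose illustration of what scenario-based simulation testing can look like, here is a toy Python sketch. The scenario parameters, the sampling bias toward risky conditions, and the simulate() stub are all placeholders of my own; a real setup would drive an actual traffic/physics simulator and the vehicle’s full software stack:

    import random
    from dataclasses import dataclass

    @dataclass
    class Scenario:
        pedestrian_crossing: bool
        visibility_m: float   # e.g. fog, night, glare
        road_friction: float  # ~1.0 dry asphalt, ~0.2 ice

    def sample_corner_case(rng: random.Random) -> Scenario:
        # Deliberately bias sampling toward the risky corners of the
        # parameter space instead of average driving conditions.
        return Scenario(
            pedestrian_crossing=True,
            visibility_m=rng.uniform(10.0, 60.0),
            road_friction=rng.uniform(0.1, 0.4),
        )

    def simulate(scenario: Scenario) -> bool:
        """Stand-in for running the AV stack in a simulator; returns True
        if the vehicle handled the scenario without a safety violation."""
        return scenario.visibility_m > 30.0 and scenario.road_friction > 0.25

    rng = random.Random(0)
    results = [simulate(sample_corner_case(rng)) for _ in range(1000)]
    print(f"corner-case pass rate: {sum(results) / len(results):.1%}")

Counting failures over deliberately hard scenarios like these says more about safety than the raw number of uneventful miles driven.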

Shaping factors of the automation frontier

We must also consider the human factors that shape the automation frontier. Generally, humans are more willing to trade quality of service for performance in low-criticality situations, yet they are far less forgiving of autonomous system failures that lead to serious accidents or deaths than they would be if the same event were caused by a human being.
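
One way to picture that frontier is as a simple threshold rule: the more critical the task, the more trustworthiness we demand before handing it to a machine. The sketch below, including the human_penalty term for our lower tolerance of machine error, is my own illustrative guess at the shape of such a rule, not a formula from the lecture:

    def acceptable_to_automate(criticality: float,
                               trustworthiness: float,
                               human_penalty: float = 0.1) -> bool:
        """Both criticality and trustworthiness are on a 0-1 scale.

        The human_penalty term reflects that society forgives human error
        more readily than machine error, so the bar for the machine sits
        above the bar we would apply to a person doing the same task.
        """
        required = min(criticality + human_penalty, 1.0)
        return trustworthiness >= required

    print(acceptable_to_automate(criticality=0.2, trustworthiness=0.5))   # True
    print(acceptable_to_automate(criticality=0.9, trustworthiness=0.95))  # False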

View of the organ and vintage cars from the 2nd floor of the Technik Museum Speyer

Following the day’s lectures, all of the HLF participants took a trip to the Technik Museum Speyer for dinner. The museum was open for participants to explore: standout exhibits included many vintage cars, a large organ, the space shuttle BURAN, and a Boeing 747. The Boeing 747 is displayed about four stories in the air, and the options for getting back down are the stairs or a steep slide. If you ever have a chance to visit, I would definitely recommend taking the slide down.

A Boeing 747 with a slide

Posted by

Khari Douglas is the Senior Program Associate for Engagement for the Computing Community Consortium (CCC), a standing committee within the Computing Research Association (CRA). In this role, Khari interacts with members of the computing research community and policy makers to organize visioning workshops and coordinate outreach activities. He is also the host and producer of the Catalyzing Computing podcast.

1 comment

  1. Khari Douglas wrote (25. September 2019):
    > In Tuesday’s opening lecture at the Heidelberg Laureate Forum (HLF), Joseph Sifakis, 2007 Turing Award winner, […] considered the interplay between the trustworthiness of the system – the system’s ability to behave as expected despite mishaps – and the criticality of the task – the severity of the impact an error will have on the fulfillment of a task.

    In that lecture, Sifakis presented this slide (/hlf/files/20190925_094509.jpg) shown in the above SciLog article, with a diagram of “Task Criticality (0 … 1) over System Trustworthiness (0 … 1)”. I find this diagram curious for two reasons:

    (1) The entire range of the diagram appears populated, especially including the extreme corner of “Task Criticality” near or at 1 and “System Trustworthiness” near 0. Does Sifakis thereby consider and illustrate highly critical tasks being given to systems which were already known or expected initially to have very low system trustworthiness? Or is he treating the low value of system trustworthiness as a conclusion of systems having failed at highly critical tasks? …

    (Perhaps a slide which is explicitly restricted to “tasks successfully accomplished” might appear less puzzling.)

    (2) The label “Trusted Human” appears in the diagram in a region corresponding to rather low values of “System Trustworthiness”.

    (Perhaps a more pertinent label, to be put in that particular region, might be “Tasks given to Humans of varying System Trustworthiness”. …)

    p.s.
    > […] ‘black-box’ ML-enabled end-to-end design approaches […]

    There, “ML” surely stands for “machine learning”. [ Link to the Wikipedia article on “Machine learning” not explicitly provided since the maximum number of links permissible in a comment to this SciLog has unfortunately not been documented. — FW]
