Model-Centered Assurance For Autonomous Systems

Susmit Jha, John Rushby, and N. Shankar

Presented at SafeComp 2020, Lisbon, Portugal (virtually), 15--18 September 2020. Published in Springer LNCS 12234, pp. 228-243.

DOI: https://doi.org/10.1007/978-3-030-54549-9_15

Abstract

The functions of an autonomous system can generally be partitioned into those concerned with perception and those concerned with action. Perception builds and maintains an internal model of the world (i.e., the system's environment) that is used to plan and execute actions to accomplish a goal established by human supervisors.

Accordingly, assurance decomposes into two parts: a) ensuring that the model is an accurate representation of the world as it changes through time and b) ensuring that the actions are safe (and effective), given the model. Both perception and action may employ AI, including machine learning (ML), and these present challenges to assurance. However, it is usually feasible to guard the actions with traditionally engineered and assured monitors, and thereby ensure safety, given the model. Thus, the model becomes the central focus for assurance.
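By way of illustration only (this example is not from the paper), a guard of this kind can be a few lines of conventionally engineered code that checks each ML-proposed action against the current model and substitutes a pre-assured fallback when the check fails. The WorldModel fields, the stopping-distance check, and the braking fallback in the Python sketch below are all hypothetical choices.

  from dataclasses import dataclass

  @dataclass
  class WorldModel:
      ego_speed: float              # current speed (m/s)
      distance_to_obstacle: float   # gap to nearest obstacle ahead (m)

  def is_safe(proposed_accel: float, model: WorldModel, max_decel: float = 6.0) -> bool:
      """Conservative check: after applying the proposed acceleration for one
      second, could the vehicle still stop before the nearest obstacle?"""
      next_speed = max(model.ego_speed + proposed_accel, 0.0)
      stopping_distance = next_speed ** 2 / (2.0 * max_decel)
      return stopping_distance < model.distance_to_obstacle

  def guarded_action(proposed_accel: float, model: WorldModel) -> float:
      """Pass the planner's proposed action through the monitor; if it fails
      the check, substitute a pre-assured fallback (maximum braking)."""
      if is_safe(proposed_accel, model):
          return proposed_accel
      return -6.0  # hypothetical safe fallback

  # Example: an aggressive proposal is overridden when the gap is small.
  model = WorldModel(ego_speed=20.0, distance_to_obstacle=25.0)
  print(guarded_action(2.0, model))   # -> -6.0 (rejected, fallback braking)
  print(guarded_action(-3.0, model))  # -> -3.0 (accepted)

Note that the guard is simple enough to assure by traditional means, but its verdict is only as good as the model it consults: hence the model becomes the central focus for assurance.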

We propose an architecture and methods to ensure the accuracy of models derived from sensors whose interpretation uses AI and ML. Rather than derive the model from sensors bottom-up, we reverse the process and use the model to predict sensor interpretation. Small prediction errors indicate the world is evolving as expected and the model is updated accordingly. Large prediction errors indicate surprise, which may be due to errors in sensing or interpretation, or to unexpected changes in the world (e.g., a pedestrian steps into the road). The former initiate error masking or recovery, while the latter require revision of the model. Higher-level AI functions assist in diagnosis and execution of these tasks.
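The following self-contained Python sketch (again hypothetical, not the paper's implementation) shows one cycle of this prediction-error loop for a notional one-dimensional position sensor; the model representation, time step, and surprise threshold are assumptions chosen only to make the example runnable.

  SURPRISE_THRESHOLD = 2.0  # prediction error (metres) above which we declare surprise
  DT = 0.1                  # time step (seconds)

  def predict_observation(model):
      """Use the model to predict what the position sensor should report next."""
      return model["position"] + model["velocity"] * DT

  def perception_step(model, observed_position):
      """One perception cycle: predict, compare, then update or diagnose."""
      predicted = predict_observation(model)
      error = abs(observed_position - predicted)

      if error < SURPRISE_THRESHOLD:
          # Small error: the world is evolving as expected; update the model.
          model["position"] = observed_position
      else:
          # Large error: surprise. Either sensing/interpretation is faulty or the
          # world changed unexpectedly; defer to higher-level diagnosis here.
          print(f"surprise: predicted {predicted:.2f}, observed {observed_position:.2f}")
      return model

  model = {"position": 0.0, "velocity": 10.0}
  for obs in [1.0, 2.1, 3.0, 12.5]:   # the last reading triggers a surprise
      model = perception_step(model, obs)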

Although this two-level architecture, in which the lower level performs "predictive processing" and the upper level performs more reflective tasks, both focused on maintenance of a world model, is derived from engineering considerations, it also matches a widely accepted theory of human cognition.

Paper

Available at Springer Link (likely paywalled), or here as an unrestricted PDF.

Slides

PDF or 25-minute video

Second generation slides and video for IFIP WG10.4 meeting on Intelligent Vehicle Dependability and Security (IVDS), January 2021

PDF or 27-minute video (unfortunately, this video has audio dropouts; I suggest you just ride them out while reading the corresponding slides).

Alternatively, here's a recording from the actual workshop.

Third generation slides for a Newton Institute workshop, 26 July 2022. PDF

Cogsci background

The architecture developed in this paper is similar to that believed to best describe the working of the human brain. There are many fine books and papers on this, but for a quick local update, please take a look at the relevant sections in the 30-page introduction to this report on Technology and Consciousness.

Assurance Background

We advocate organizing assurance as an "assurance case". Here's our most recent paper and here's an overview of my other papers on this topic.

BibTeX Entry

@inproceedings{Jha-etal:Safecomp20,
  AUTHOR = {Susmit Jha and John Rushby and N. Shankar},
  TITLE = {Model-Centered Assurance for Autonomous Systems},
  PAGES = {228--243},
  CROSSREF = {Safecomp20}
}

@proceedings{Safecomp20,
  BOOKTITLE = {Computer Safety, Reliability, and Security ({SAFECOMP} 2020)},
  TITLE = {Computer Safety, Reliability, and Security ({SAFECOMP} 2020)},
  YEAR = 2020,
  EDITOR = {Ant\'{o}nio Casimiro and others},
  ADDRESS = {Lisbon, Portugal},
  MONTH = sep,
  SERIES = {Lecture Notes in Computer Science},
  VOLUME = 12234,
  PUBLISHER = {Springer}
}

