Models Are Central to AI Assurance

Robin Bloomfield (City, University of London) and John Rushby (SRI)

ASSURE 2024 Workshop, part of the IEEE 35th International Symposium on Software Reliability Engineering, Tsukuba, Japan, October 2024

DOI: 10.1109/ISSREW63542.2024.00078

Abstract

All interactive systems need a model of their world that they can use to calculate effective behavior. For assurance, the model needs to be accurate. In autonomous vehicles and many other AI applications, however, the model is built by a perception system based on machine learning, and the dependability perspective maintains that its accuracy cannot be assured. We outline this perspective and methods for providing assurance using guards and defense in depth, and we also outline predictive processing as a possible way to construct assured models. We then discuss LLMs, which typically lack explicit models of the world, and suggest possible mitigations for their correspondingly unpredictable behavior. Finally, we consider models in AGI.
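To make the guard idea concrete, here is a minimal sketch (illustrative only, not from the paper; all names, sensors, and thresholds are hypothetical): a runtime guard accepts an ML perception estimate only when a simple, independently assured check corroborates it, and otherwise falls back to conservative behavior.

from dataclasses import dataclass

@dataclass
class Estimate:
    distance_m: float   # ML-perceived distance to the nearest obstacle
    confidence: float   # model's self-reported confidence in [0, 1]

def guarded_speed(est: Estimate, lidar_min_range_m: float) -> float:
    """Speed command that trusts the ML estimate only when a cheap,
    independently assured bound (raw lidar minimum range) corroborates it."""
    SAFE_GAP_M = 5.0        # conservative stopping margin (hypothetical)
    NOMINAL_SPEED = 10.0    # m/s in normal operation
    FALLBACK_SPEED = 2.0    # m/s in degraded, guarded operation

    # Guard check: the ML model must not claim more free space than the
    # trusted raw sensor can support.
    corroborated = est.distance_m <= lidar_min_range_m + SAFE_GAP_M
    if corroborated and est.confidence > 0.9:
        return NOMINAL_SPEED if est.distance_m > SAFE_GAP_M else 0.0
    # Defense in depth: ignore the unassured model and act on the
    # trusted bound alone.
    return FALLBACK_SPEED if lidar_min_range_m > SAFE_GAP_M else 0.0

The point of the pattern is that the assurance case rests on the simple guard and the trusted sensor bound, not on the ML component whose accuracy cannot be assured.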

PDF preprint

BibTeX Entry

@INPROCEEDINGS{Bloomfield&Rushby:Assure24,
	AUTHOR = {Robin Bloomfield and John Rushby},
	TITLE = {Models are Central to {AI} Assurance},
	BOOKTITLE = {ASSURE 2024, Proceedings of IEEE 35th International Symposium
		  on Software Reliability Engineering Workshops (ISSREW)},
	YEAR = 2024,
	PAGES = {199--202},
	ADDRESS = {Tsukuba, Japan},
	MONTH = oct
}
