If the mode structures of the cockpit automation and of the pilot's mental model are both described as finite state machines (a representation used in both computer science and human factors), then a highly automated technique known as "model checking" can systematically and exhaustively compare the behaviors induced by the software against those of the mental model. Discrepancies pinpoint areas where design changes, or revisions to training materials or procedures, should be considered. Model checking is a mature technique that is widely used in the analysis of protocols and complex hardware designs. We illustrate the approach by demonstrating its ability to detect the "kill-the-capture" surprise in the MD-88 autopilot, and further issues in proposed fixes to that problem. The approach does not supplant the contributions of those working in human factors and aviation psychology, but rather provides them with a tool to examine properties of their models using automated calculation. The approach can be used to explore the consequences of alternative designs and cues, and of systematic operator error, and to assess the cognitive complexity of designs. The paper avoids the complexities of mathematical logic; it is hoped that the description will be accessible to those from the human factors and software development communities for whom model checking may be new.
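The comparison described above can be sketched in a few lines of code: both the automation's mode logic and the pilot's mental model are encoded as finite state machines over the same events, and an exhaustive breadth-first search of their synchronous product reports every reachable point where the two disagree about the current mode. This is only a minimal illustration of the idea, not the method or tools used in the paper; the mode names, events, and transitions below are hypothetical simplifications loosely inspired by the "kill-the-capture" scenario, not the actual MD-88 logic.

```python
from collections import deque

EVENTS = ["arm_capture", "set_vert_speed", "reach_altitude"]

# Hypothetical automation: selecting a new vertical-speed target silently
# disarms a previously armed altitude capture (the "surprise").
AUTOMATION = {
    ("VS", "arm_capture"): "VS_ARMED",
    ("VS_ARMED", "set_vert_speed"): "VS",        # capture killed here
    ("VS_ARMED", "reach_altitude"): "ALT_HOLD",
    ("VS", "set_vert_speed"): "VS",
}

# Hypothetical pilot mental model: believes the capture stays armed.
MENTAL_MODEL = {
    ("VS", "arm_capture"): "VS_ARMED",
    ("VS_ARMED", "set_vert_speed"): "VS_ARMED",  # pilot expects it to remain armed
    ("VS_ARMED", "reach_altitude"): "ALT_HOLD",
    ("VS", "set_vert_speed"): "VS",
}

def find_divergences(auto, mental, start="VS"):
    """Exhaustively explore the synchronous product of the two machines;
    return every event trace leading to a state where they disagree."""
    seen = {(start, start)}
    queue = deque([((start, start), [])])
    divergences = []
    while queue:
        (a, m), trace = queue.popleft()
        for ev in EVENTS:
            # An event with no transition defined leaves the state unchanged.
            a2 = auto.get((a, ev), a)
            m2 = mental.get((m, ev), m)
            if a2 != m2:
                divergences.append((trace + [ev], a2, m2))
            elif (a2, m2) not in seen:
                seen.add((a2, m2))
                queue.append(((a2, m2), trace + [ev]))
    return divergences

if __name__ == "__main__":
    for trace, actual, believed in find_divergences(AUTOMATION, MENTAL_MODEL):
        print(f"after {trace}: automation={actual}, pilot believes={believed}")
```

Run on this toy model, the search surfaces the single discrepancy: after arming the capture and then adjusting vertical speed, the automation is back in plain vertical-speed mode while the pilot still believes the capture is armed. A real model checker (the paper's approach) performs the same kind of exhaustive state exploration on far richer models, with temporal-logic properties and counterexample traces.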
@INPROCEEDINGS{Rushby-etal:DASC99,
  AUTHOR    = {John Rushby and Judith Crow and Everett Palmer},
  TITLE     = {An Automated Method to Detect Potential Mode Confusions},
  BOOKTITLE = {18th AIAA/IEEE Digital Avionics Systems Conference},
  YEAR      = 1999,
  ADDRESS   = {St Louis, MO},
  MONTH     = oct
}