Automation surprises occur when an automated system behaves differently
than its operator expects. If the actual system behavior and the operator’s
“mental model” are both described as finite state transition systems, then mechanized
techniques known as “model checking” can automatically discover any scenarios
that cause the behaviors of the two descriptions to diverge from one another.
These scenarios identify potential surprises and pinpoint
areas where design changes, or revisions to training materials or procedures,
should be considered. The mental models can be suggested by human factors
experts, derived from training materials, or constructed to express simple
requirements for “consistent” behavior. The approach is demonstrated by
applying the Murφ state exploration system to a “kill-the-capture” surprise in
the MD-88 autopilot.
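To make the divergence check concrete, here is a minimal sketch in Python (not Murφ input) of the kind of calculation involved: a breadth-first search over the synchronous product of two finite state transition systems, reporting the shortest event sequence on which they disagree. The states, events, and transition tables are illustrative stand-ins loosely patterned on the kill-the-capture scenario, not the actual MD-88 autopilot logic.

```python
from collections import deque

# Each machine maps (state, event) to a next state; events not listed
# leave the state unchanged.
EVENTS = ["arm_capture", "select_vert_speed", "reach_altitude"]

# "Actual" system: selecting a new vertical mode silently kills an armed capture.
actual = {
    ("idle", "arm_capture"): "armed",
    ("armed", "select_vert_speed"): "idle",    # capture silently disarmed
    ("armed", "reach_altitude"): "capturing",
}

# Operator's mental model: capture is believed to stay armed through the change.
mental = {
    ("idle", "arm_capture"): "armed",
    ("armed", "select_vert_speed"): "armed",   # believed to remain armed
    ("armed", "reach_altitude"): "capturing",
}

def find_surprise(actual, mental, start=("idle", "idle")):
    """Breadth-first search of the synchronous product of the two machines;
    returns the shortest event sequence after which their states diverge."""
    seen = {start}
    queue = deque([(start, [])])
    while queue:
        (a, m), path = queue.popleft()
        if a != m:                 # both machines share a state vocabulary here,
            return path            # so divergence is simply state inequality
        for ev in EVENTS:
            nxt = (actual.get((a, ev), a), mental.get((m, ev), m))
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [ev]))
    return None                    # no reachable divergence: no surprise

if __name__ == "__main__":
    print("surprise scenario:", find_surprise(actual, mental))
    # prints: surprise scenario: ['arm_capture', 'select_vert_speed']
```

A model checker such as Murφ performs essentially this search, but over far larger state spaces and with a richer description language; the sketch conveys only the shape of the calculation.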
This approach does not supplant the contributions of those working in human
factors and aviation psychology, but rather provides them with a tool to
examine properties of their models using mechanized calculation. These calculations
can be used to explore the consequences of alternative designs and cues, to
examine the effects of systematic operator error, and to assess the cognitive
complexity of designs.
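For instance, one way to explore systematic operator error within this framework is to perturb the mental model with a plausible misconception and rerun the same search. The fragment below continues the hypothetical sketch above; the injected transition is invented purely for illustration.

```python
# Continuing the sketch above: inject a hypothetical misconception in which
# the operator expects altitude capture even when nothing has been armed,
# then rerun the divergence search on the perturbed mental model.
slipped = dict(mental)
slipped[("idle", "reach_altitude")] = "capturing"

print("with misconception:", find_surprise(actual, slipped))
# prints: with misconception: ['reach_altitude']
```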
The description of model checking is tutorial in character, in the hope of
making it accessible to those in the human factors community to whom this
technology may be new.