Autonomous driving is an area that has seen rapid growth in recent years. A long-held belief in this field is that more automation equates to more safety. However, some researchers continue to challenge this conviction, arguing instead for adaptive automation (Hancock et al., 2013).
In the context of driving, human-machine systems implementing adaptive automation are envisioned to keep the driver continuously engaged in the driving task while dynamically adapting the task load to the driver's momentary cognitive capacity. A key step toward this approach is to continuously monitor the driver's mental state and to predict when the automation should take over more responsibility and when it should hand responsibility back, preventing the driver from mentally disengaging from the driving task.
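The allocation logic sketched above can be illustrated as a simple threshold policy with hysteresis. This is a minimal, hypothetical sketch, not the implementation of any cited system; the function name, workload scale, and thresholds are all illustrative assumptions.

```python
# Hypothetical adaptive-automation arbitration: all names and thresholds
# are illustrative assumptions, not part of any cited system.

def adapt_automation(workload: float, automation_on: bool,
                     high: float = 0.8, low: float = 0.4) -> bool:
    """Decide whether the automation should hold responsibility.

    Hysteresis (high > low) avoids rapid hand-overs near a single
    threshold: automation takes over when estimated workload exceeds
    `high`, and hands control back only once workload drops below `low`.
    """
    if workload > high:
        return True          # driver overloaded: automation takes over
    if workload < low:
        return False         # driver has spare capacity: hand task back
    return automation_on     # in between: keep the current allocation


# Example trace over a stream of workload estimates
# (e.g., decoded from neuroimaging data):
state = False
decisions = []
for w in (0.3, 0.9, 0.6, 0.5, 0.2):
    state = adapt_automation(w, state)
    decisions.append(state)
# decisions: [False, True, True, True, False]
```

The hysteresis band reflects the goal stated above: responsibility is handed back only once the driver clearly has spare capacity, rather than oscillating around a single cut-off.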
Recent studies have predicted mental workload from neuroimaging data (e.g., fNIRS; Unni et al., 2017; Scheunemann et al., 2019), but with limitations: different types of cognitive workload interacted rather than added at the brain level, which reduced prediction accuracy for two cognitive concepts relevant to driving, working memory load and visuospatial attention. In this project, we developed a cognitive model in the cognitive architecture ACT-R that integrates these two concepts to provide insights into how, when, and where they interact.