Barragan, D., Howard, J. J., Rabbitt, L. R., and Sirotin, Y. B.
PLoS One 17.11 (2022): e0277625

Face masks, recently adopted to reduce the spread of COVID-19, have had the unintended consequence of increasing the difficulty of face recognition. In security applications, face recognition algorithms are used to identify individuals and present results for human review.
This combination of human and algorithm capabilities, known as human-algorithm teaming, is intended to improve total system performance. However, prior work has shown that human similarity-confidence judgments of face pairs can be biased by an algorithm's decision, even when that decision is in error. This can reduce team effectiveness, particularly for difficult face pairs. We conducted two studies to examine whether face masks, now routinely present in security applications, affect the degree to which humans experience this cognitive bias. We first compared the influence of an algorithm's decisions on human similarity-confidence ratings in the presence and absence of face masks and found that face masks more than doubled the influence of algorithm decisions on those ratings. We then investigated whether this increase in cognitive bias depended on perceived algorithm accuracy by also presenting algorithm accuracy rates in the presence of face masks. We found that making humans aware of the potential for algorithm errors mitigated the increase in cognitive bias due to face masks. Our findings suggest that humans reviewing face recognition algorithm decisions should be made aware of the potential for algorithm errors in order to improve human-algorithm team performance.