The growing adoption of biometric identity systems, notably face recognition, has raised questions regarding whether performance is equitable across demographic groups. Prior work on this issue showed that the performance of face recognition systems varies with demographic variables. However, biometric systems make two distinct types of matching errors, which lead to different outcomes for users depending on the technology use case. In this research, we develop a framework for classifying biometric performance differentials that separately considers the effects of false positive and false negative outcomes, and show that oft-cited evidence regarding biometric equitability has focused primarily on false negatives. We then correlate demographic variables with false positive outcomes in a diverse population using a commercial face recognition algorithm, and show that the false match rate (FMR) at a fixed threshold increases >400-fold for broadly homogeneous groups (individuals of the same age, same gender, and same race) relative to heterogeneous groups. This effect was driven by systematic shifts in the tails of the impostor distribution, associated primarily with homogeneity in race and gender. For specific demographic groups, we observed the highest false match rate for older males who self-identified as White and the lowest for older males who self-identified as Black or African American. The magnitude of FMR differentials between specific homogeneous groups (<3-fold) was modest in comparison with the FMR increase associated with broad demographic homogeneity. These results demonstrate that the false positive outcomes of face recognition systems are not simply linked to single demographic factors, and that careful consideration of interactions between multiple factors is needed when evaluating the equitability of these systems.
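The false match rate at a fixed threshold, as used above, is the fraction of impostor (non-mated) comparisons whose similarity score meets or exceeds the operating threshold. A minimal sketch of this computation, using entirely synthetic, illustrative scores (not data from the study), might look like:

```python
# Minimal sketch of false match rate (FMR) at a fixed threshold.
# All similarity scores below are synthetic, illustrative values --
# they are NOT drawn from the study described in the abstract.

def false_match_rate(impostor_scores, threshold):
    """Fraction of impostor (non-mated) comparisons whose similarity
    score meets or exceeds the decision threshold, i.e. the rate at
    which different individuals are falsely matched."""
    if not impostor_scores:
        raise ValueError("need at least one impostor comparison")
    return sum(s >= threshold for s in impostor_scores) / len(impostor_scores)

# Hypothetical impostor scores for pairs drawn from a demographically
# heterogeneous population vs. a broadly homogeneous group (same age
# band, gender, and race). Homogeneous pairs tend to score higher,
# shifting the upper tail of the impostor distribution and inflating
# FMR at the same fixed threshold.
heterogeneous = [0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50, 0.91]
homogeneous   = [0.30, 0.35, 0.40, 0.45, 0.55, 0.60, 0.91, 0.92, 0.93, 0.94]

THRESHOLD = 0.90  # fixed operating threshold

print(false_match_rate(heterogeneous, THRESHOLD))  # 0.1
print(false_match_rate(homogeneous, THRESHOLD))    # 0.4
```

The sketch illustrates the mechanism only: the threshold is held fixed while the score distribution shifts, so the same system exhibits different FMRs for different pairings of demographic groups.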