In quantum mechanics, quantum correlation is the expected value of the product of the outcomes measured on the two sides of an experiment. In other words, it is the expected change in physical characteristics as one quantum system passes through an interaction site. In John Bell's 1964 paper that inspired the Bell test, it was assumed that the outcomes A and B could each take only one of two values, -1 or +1. It followed that the product, too, could only be -1 or +1, so that the average value of the product would be

E = \frac{N_{++} - N_{+-} - N_{-+} + N_{--}}{N_{\text{total}}},
where, for example, N_{++} is the number of simultaneous instances ("coincidences") of the outcome +1 on both sides of the experiment, and N_{\text{total}} is the total number of pairs emitted by the source.
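As a concrete illustration of this average, the following sketch evaluates the estimator for a hypothetical run in the ideal case where every emitted pair is detected; the counts are made up purely for the arithmetic.

```python
# Hypothetical coincidence counts for one pair of detector settings
# (illustrative numbers only, not experimental data).
n_pp, n_mm, n_pm, n_mp = 40, 35, 15, 10        # N++, N--, N+-, N-+
n_total = n_pp + n_mm + n_pm + n_mp            # ideal case: every emitted pair detected

# Average of the product of outcomes: like signs contribute +1, unlike signs -1.
E = (n_pp + n_mm - n_pm - n_mp) / n_total
print(E)  # 0.5
```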
However, in actual experiments, detectors are not perfect and produce many null outcomes. The correlation can still be estimated using the sum of coincidences, since zeros clearly do not contribute to the average, but in practice, instead of dividing by N_{\text{total}}, it is customary to divide by

N_{++} + N_{+-} + N_{-+} + N_{--},
the total number of observed coincidences. The legitimacy of this method relies on the assumption that the observed coincidences constitute a fair sample of the emitted pairs.
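A minimal sketch of the difference between the two normalizations, assuming hypothetical counts in which most emitted pairs yield no coincidence at all:

```python
# Hypothetical counts with imperfect detectors: only a fraction of the
# emitted pairs produce a coincidence (numbers are illustrative only).
n_pp, n_mm, n_pm, n_mp = 40, 35, 15, 10
n_emitted = 500                                 # unknown in a real experiment
n_observed = n_pp + n_mm + n_pm + n_mp          # observed coincidences

# Normalizing by the emitted pairs dilutes the correlation toward zero,
# because every undetected pair is counted as a zero.
E_emitted = (n_pp + n_mm - n_pm - n_mp) / n_emitted      # 0.10

# Customary estimator: normalize by observed coincidences, which recovers
# the underlying correlation only if the fair-sampling assumption holds.
E_observed = (n_pp + n_mm - n_pm - n_mp) / n_observed    # 0.50
print(E_emitted, E_observed)
```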
Following local realist assumptions as in Bell's paper, the estimated quantum correlation converges after a sufficient number of trials to

E(a, b) = \int A(a, \lambda)\, B(b, \lambda)\, \rho(\lambda)\, d\lambda,
where a and b are detector settings and λ is the hidden variable, drawn from a distribution ρ(λ).
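The convergence of the sample average to this integral can be seen numerically. The sketch below uses a toy local hidden variable model chosen only for illustration (it is not the model discussed in Bell's paper): λ is assumed uniform on [0, 2π), and each side's outcome is the sign of a cosine that depends only on that side's setting and λ, as local realism requires.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy local hidden variable model (illustrative assumption):
# each outcome is +/-1 and depends only on the local setting and lambda.
def A(a, lam):
    return np.sign(np.cos(lam - a))

def B(b, lam):
    # Sign flipped so that equal settings give perfect anticorrelation.
    return -np.sign(np.cos(lam - b))

def E_estimate(a, b, n_trials=1_000_000):
    lam = rng.uniform(0.0, 2.0 * np.pi, size=n_trials)   # draws from rho(lambda)
    return np.mean(A(a, lam) * B(b, lam))                 # sample mean of the product

# For this particular model the integral evaluates to
#   E(a, b) = -1 + 2*|a - b|/pi   for |a - b| <= pi,
# so the estimate below should approach -0.5 as n_trials grows.
print(E_estimate(0.0, np.pi / 4))
```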