Bell's inequality violation experiments are becoming increasingly popular in the practical teaching of undergraduate and master's degree students. Bell's parameter S is obtained from 16 polarization correlation measurements performed on entangled photon pairs. We first report a detailed analysis of the uncertainty u(S) of Bell's parameter, taking into account coincidence count statistics and errors in the polarizers' orientations. We show, using both computational modeling and experimental measurement, that the actual sequence of the polarizer settings has an unexpected and strong influence on the error budget. This result may also be relevant to other settings in which errors in parameters have non-random effects on the measurement.
I. INTRODUCTION
Quantum optics experiments based on the generation of entangled photon pairs are useful pedagogical tools for undergraduate and master's degree students.1–5 Using the polarization properties of photons, they enable manipulation of the mathematical formalism in a simple two-dimensional space and illustrate the foundations of quantum physics: the preparation of quantum states and the probabilistic and statistical aspects of projective measurements. It is also possible with such experiments to introduce students to recent developments concerning quantum technologies and their applications to computing, communications, or sensing.6
The most striking of these quantum optics experiments is the violation of Bell's inequality7,8 that demonstrates the non-locality of quantum physics. It uses entangled photon pairs to distinguish experimentally between local-realistic and non-local-realistic theories. The Nobel Prize in Physics 2022 was awarded to Alain Aspect, John F. Clauser, and Anton Zeilinger “for experiments with entangled photons, establishing the violation of Bell inequalities and pioneering quantum information science,”9 underlining the current significance of quantum entanglement in physics. In addition to research in quantum physics, such experiments were developed for pedagogical purposes some 20 years ago10,11 and have become widely available.12
Improving the significance of the violation and closing loopholes are topics for the research literature. In this pedagogical paper, we focus instead on reducing the measurement error. Usually, the uncertainty in Bell's parameter is calculated taking into account only coincidence count statistics.10,12,13 In our setup, in which the polarizers are rotated by hand, we must furthermore consider the experimental errors in the polarizers' orientations. Asking students to consider the trade-offs between these two main uncertainty sources leads to fruitful discussions.
Additionally, during a master's project, we realized that, unexpectedly, the actual experimental sequence of the polarizers' orientations had quite a significant impact on the error budget. As we shall see, this comes from correlations between the different measurements that make up S. We present these results not only because they may allow other instructors to help their students minimize the measurement error but also because we suspect that similar correlations between measurement results may occur unexpectedly in other systems, and we hope that our work will aid instructors in recognizing those correlations.
This paper is organized as follows: After the description of our experiment and the main theoretical aspects, Bell's parameter uncertainty is derived analytically by the Gaussian error propagation method. Then this uncertainty is numerically modelled by a Monte Carlo algorithm in order to better describe real experimental sequences. This method allowed us to optimize, with a genetic algorithm, the measurement protocol for Bell's parameter in order to reduce the acquisition time or reinforce the strength of the Bell's inequality violation. Finally, we give experimental evidence of our findings.
II. BELL'S INEQUALITY IN THE TEACHING LABORATORY
A. Experimental setup
The polarizers PA and PB are oriented, respectively, at angles α and β from the vertical axis (state |V⟩). We denote their transmission eigenstates |α⟩ and |β⟩. The polarizers perform the projection of the photon-pair state onto |α⟩⊗|β⟩, which is a measure of the polarization correlation of the photons of the pair. Interference filters (IFs) placed in front of Geiger mode avalanche photodiodes (APDs) transmit only photons in a spectral bandwidth of 10 nm centered at 810 nm. The count rates on each detector are denoted NA and NB. Coincidence detection on both detectors is performed in a temporal coincidence window with a field programmable gate array (FPGA) card (Altera DE2-115) to record only the events that come from the same photon pair.16,17 The corresponding rate is denoted Nc. Data are sent from the FPGA card to a computer every 100 ms.
B. Theory reminder
C. Experimental results for Bell's parameter
A typical sequence of measurements is given in Table II of Appendix A. Since each of the four correlation coefficients of Eq. (6) requires four coincidence measurements, 16 values are measured for NA, NB, and Nc. The acquisition time for each polarization configuration is 10 s. The total number of photon pairs involved was obtained from the sum of two additional measurements, from which we infer the incident pair rate. We compute S = 2.590 or S = 2.528 depending on whether or not we subtract the accidental coincidences.24 In practice, our students use the first value, so we will do the same. To make sense of this raw number, we introduce the notion of error budget to our students, i.e., the estimation of the different sources of uncertainty in the measurement.
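As a reminder of the structure behind these 16 measurements, the standard CHSH estimator built from the coincidence counts N(α, β) is recalled below; this standard form presumably matches the definitions used in Eqs. (3)–(6), up to notation:

\[
E(\alpha,\beta)=\frac{N(\alpha,\beta)+N(\alpha_{\perp},\beta_{\perp})-N(\alpha,\beta_{\perp})-N(\alpha_{\perp},\beta)}{N(\alpha,\beta)+N(\alpha_{\perp},\beta_{\perp})+N(\alpha,\beta_{\perp})+N(\alpha_{\perp},\beta)},
\qquad
S=E(a,b)-E(a,b')+E(a',b)+E(a',b'),
\]

where α⊥ = α + 90°, and (a, a′) and (b, b′) are the two settings of polarizers A and B, respectively; local realism bounds |S| ≤ 2.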
D. Analytical estimate of the uncertainty in Bell's parameter
As described earlier, Bell's parameter S is computed from 16 individual measurements labelled by i = 1, …, 16. Each measurement involves three random variables (Ni, αi, βi), where Ni is the actual number of coincidences recorded in the measurement and αi and βi are the actual polarizer orientations, which may differ from the desired values. We have motorized polarizer mounts (see below) but we don't use them in practical work with students because we think it diminishes the interest of hands-on experimentation. Thus, the main contributions to our error budget are the count rates and the accuracy of the orientations of the polarizers. To the best of our knowledge, only the counting uncertainty is usually taken into account. Photon pairs from spontaneous parametric downconversion follow a Poisson distribution,14 well approximated by a normal distribution for large numbers of pairs. We suppose, moreover, that the mean number of pairs involved in each measurement is constant and equal to Np. All Ni are then calculated from Eq. (3), with an associated standard deviation equal to the square root of their mean value.
When uncertainty in the polarizer orientations is included, it is usually assumed to follow a normal law centered on the nominal values (second and third columns of Table II in the Appendix), with the same standard deviation for both angles, accounting for the experimental imperfections when rotating the polarizers, either by hand or with motorized rotation mounts.
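For reference, under the assumption that all 48 input variables are independent, the generic Gaussian error propagation applied to S(N1, …, N16, α1, …, α16, β1, …, β16) takes the form below; the symbol u_θ for the common angular standard deviation is our own shorthand, and Eq. (10) is presumably the specialization of this expression to the nominal CHSH angles:

\[
u^{2}(S)=\sum_{i=1}^{16}\left[\left(\frac{\partial S}{\partial N_{i}}\right)^{2}N_{i}
+\left(\frac{\partial S}{\partial \alpha_{i}}\right)^{2}u_{\theta}^{2}
+\left(\frac{\partial S}{\partial \beta_{i}}\right)^{2}u_{\theta}^{2}\right],
\]

where u²(Ni) = Ni follows from the Poisson counting statistics.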
However, the above uncertainty analysis does not correctly model real experiments, in which normally only one polarizer, and not both, is rotated between acquisitions. While the Ni can still be considered as independent variables, the αi and βi are no longer independent, as one may have in practice αi+1 = αi or βi+1 = βi. The analytical Gaussian error propagation formula above, which implicitly assumes that both angles are reset for each measurement, can thus be considered as a worst-case study. We will introduce a numerical approach to deal with such real experimental conditions.
III. BELL'S PARAMETER UNCERTAINTY MODELLING WITH MONTE-CARLO SIMULATIONS
In order to better determine the statistical uncertainty of Bell's parameter, we implemented a Monte Carlo algorithm.20,21 We first benchmarked our code against the analytical formula (Eq. (10)) using what we call in the following the standard sequence, in which both polarizers are reset for each of the 16 measurements. The process for estimating the statistical uncertainty of Bell's parameter is as follows (a minimal code sketch is given after the list):
- Assign a random variable to each input parameter: in our case, the 16 triplets (Ni, αi, βi). The mean number of incident photon pairs Np being known, the random variable associated with Ni follows a Poisson distribution with a mean value given either by Eq. (3) (true Bell state) or by Eq. (5) (non-ideal Bell state). The angular errors are generated with a Gaussian distribution of the chosen width.
- Calculate the associated Bell's parameter.
- Repeat the above procedure Nruns times.
- Identify the uncertainty u(S) with the standard deviation σS of Bell's parameter calculated over the statistical ensemble.
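The sketch below is a minimal Python implementation of this procedure for the standard sequence. The angle set (a, a′, b, b′) = (−45°, 0°, −22.5°, 22.5°) and the ideal-Bell-state coincidence law N(α, β) = (Np/2) cos²(α − β) are the usual textbook choices and are assumptions on our part; the actual settings of Table II and the exact form of Eq. (3) may differ.

```python
import numpy as np

rng = np.random.default_rng()

# Assumed CHSH settings in degrees (a, a' for polarizer A; b, b' for polarizer B).
A_ANGLES = {"a": -45.0, "ap": 0.0}
B_ANGLES = {"b": -22.5, "bp": 22.5}


def mean_coincidences(alpha, beta, n_pairs):
    """Ideal Bell state (|HH> + |VV>)/sqrt(2): N(alpha, beta) = Np/2 * cos^2(alpha - beta)."""
    return 0.5 * n_pairs * np.cos(np.radians(alpha - beta)) ** 2


def simulate_S(n_pairs, sigma_deg):
    """One Monte Carlo realization of S for the 'standard sequence':
    both polarizers are reset (fresh angular error) for each of the 16 measurements."""
    E = {}
    for ka, alpha0 in A_ANGLES.items():
        for kb, beta0 in B_ANGLES.items():
            counts = np.empty((2, 2))
            for i, da in enumerate((0.0, 90.0)):      # alpha, alpha_perp
                for j, db in enumerate((0.0, 90.0)):  # beta, beta_perp
                    alpha = alpha0 + da + rng.normal(0.0, sigma_deg)
                    beta = beta0 + db + rng.normal(0.0, sigma_deg)
                    counts[i, j] = rng.poisson(mean_coincidences(alpha, beta, n_pairs))
            E[ka, kb] = (counts[0, 0] + counts[1, 1]
                         - counts[0, 1] - counts[1, 0]) / counts.sum()
    return E["a", "b"] - E["a", "bp"] + E["ap", "b"] + E["ap", "bp"]


# u(S) is identified with the standard deviation over Nruns realizations.
samples = [simulate_S(n_pairs=10_000, sigma_deg=1.0) for _ in range(1000)]
print(f"S = {np.mean(samples):.3f} +/- {np.std(samples):.3f}")
```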
We show in Fig. 2 the evolution of the standard deviation of Bell's parameter σS as a function of Nruns for different values of the polarizer orientation uncertainty, using the true Bell state. We have chosen a large number of photon pairs, Np = 10 000, so that only the angular error contributes significantly to σS. This figure illustrates that convergence of the Monte Carlo simulations is typically obtained for Nruns = 1000, whatever the chosen value of the angular uncertainty.
Figure 3(a) shows how this calculated standard deviation depends on both the mean total count number Np and the polarizers' angular uncertainty.
For a perfect angular setting of the polarizers (purple solid line), σS reduces to the counting error, with a slight deviation from the value predicted by the normal approximation (dashed blue line) at low counts, where the Poisson and Gaussian distributions actually differ.
Conversely, σS reaches a nonzero asymptotic value at large Np that depends on the angular uncertainty. A linear fit (inset, Fig. 3(b)) gives a slope that is not significantly different from the one obtained with our analytical analysis.
These asymptotic behaviours establish the consistency of our analytical and numerical approaches. We can now use our numerical simulations in more realistic cases, where the experimenter may rotate a single polarizer from one measurement to another. As stated before, this operating mode induces correlations between the αi's and between the βi's that we don't know how to take into account analytically.
IV. OPTIMIZING THE EXPERIMENTAL PROTOCOL WITH MONTE-CARLO SIMULATIONS
A. Optimization with the simplest sequences
Intuitively, we may expect a reduced uncertainty in S when the number of interventions by the experimenter is lowered. The three sequences shown in Fig. 4 are ones that minimize the total number of rotations of polarizers A and B. They have only NRot = 17 independent αi's and βi's instead of 32: for the first measurement, the experimenter sets both polarizers, and then, for the remaining 15 measurements, only one polarizer (A or B) is rotated each time. In the following, we call them short sequences.
Numerical implementation of such realistic sequences is straightforward. For the first measurement, both angles are randomly chosen. Then, from one measurement to the next, we choose a new random value of either α or β according to the specified sequence.
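A minimal sketch of this sampling rule is given below; it is compatible with the previous code block. The polarizer settings are given as an ordered list of nominal angle pairs, and a fresh Gaussian error is drawn only when the corresponding nominal angle changes from one measurement to the next, i.e., when that polarizer is actually rotated.

```python
import numpy as np

rng = np.random.default_rng()


def realized_angles(nominal_sequence, sigma_deg):
    """nominal_sequence: ordered list of (alpha_nominal, beta_nominal) in degrees.
    Returns the actual angle pairs: a new Gaussian error is drawn only when a
    polarizer is rotated; otherwise the previous (erroneous) angle is kept."""
    actual = []
    a_err = b_err = 0.0
    prev_a = prev_b = None
    for a_nom, b_nom in nominal_sequence:
        if a_nom != prev_a:                 # polarizer A is rotated
            a_err = rng.normal(0.0, sigma_deg)
        if b_nom != prev_b:                 # polarizer B is rotated
            b_err = rng.normal(0.0, sigma_deg)
        actual.append((a_nom + a_err, b_nom + b_err))
        prev_a, prev_b = a_nom, b_nom
    return actual


# Example: three measurements in which only polarizer B is rotated after the first one.
print(realized_angles([(-45.0, -22.5), (-45.0, 22.5), (-45.0, 67.5)], sigma_deg=1.0))
```

The correlated angles produced in this way can then be fed into the coincidence law and CHSH combination of the previous sketch to estimate σS for any of the short sequences.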
We compare in Fig. 5(a) the standard deviation of Bell's parameter for the standard sequence and for the short sequences "Snake" and "Friezes 1 and 2" depicted in Fig. 4.
First, we observe that, globally, the short sequences give significantly better results, i.e., a lower asymptotic uncertainty in Bell's parameter.
Second, we notice that the short sequences are not all equally effective (Fig. 5(b)), with Snake performing almost 20% better than Frieze 2. This is unexpected: it is not only the total number of rotations NRot that matters but also the order in which they are performed. Correlations between the angular settings of the polarizers impact Bell's parameter in a quite intricate way.
B. Optimization of the standard deviation of Bell's parameter with a genetic algorithm
Our inability to explain why some short sequences are better than others suggests that there might be even better measurement sequences than the ones considered thus far. To address this issue, an optimization algorithm is needed, since the total number of possible protocols is 16! ≈ 2 × 10^13. As we cannot perform an exhaustive exploration of the whole sequence space, we have chosen to use a genetic algorithm technique.22
First, we label the 16 different polarizer settings as shown in Fig. 6. The visual representation of an experimental sequence as a path visiting each square of the checkerboard (Fig. 4) is then equivalent to a permutation of these 16 labels, which is more suitable for computer handling.
Accordingly, the “Snake” and the “Friezes” are encoded as the following permutations:
- Snake:
- Frieze 1:
- Frieze 2:
We have chosen the following parameters for our genetic optimization process (a schematic implementation is sketched after this list):
- Np = 100 000 incoming pairs, so that the standard deviation of Bell's parameter depends essentially on the angular uncertainty of the polarizers' orientations, which is set to a fixed value.
- A true Bell state is considered, for which the number of coincidences for each measurement is given by Eq. (3).
- The population is formed by 1000 sequences randomly chosen for the first generation (“parents”). We let the population evolve and select the 1000 “children” with the lowest uncertainty in Bell's parameter for the next generation.22
- For each “generation,” Nruns = 2000 iterations are computed for each “individual” in order to ensure the convergence of the Monte Carlo simulation.
- The genetic algorithm is used without any constraints: there is no limit on the number of polarizer rotations (it is not restricted to one at each step), and there is no condition relating the first and last positions of the polarizers.
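The sketch below shows one common way to implement such a genetic search over permutation-encoded sequences, using order crossover, swap mutation, and truncation selection. The operators, the number of generations, and the mutation rate are illustrative choices, not necessarily those of Ref. 22; the fitness function is a placeholder that, in our case, would return the Monte Carlo estimate of σS for the measurement order it receives (for instance built from the two previous sketches).

```python
import numpy as np

rng = np.random.default_rng()


def order_crossover(p1, p2):
    """Order crossover (OX): copy a slice of p1, fill the rest in the order of p2."""
    n = len(p1)
    i, j = sorted(rng.choice(n, size=2, replace=False))
    child = [None] * n
    child[i:j + 1] = p1[i:j + 1]
    remaining = [g for g in p2 if g not in child]
    for k in range(n):
        if child[k] is None:
            child[k] = remaining.pop(0)
    return child


def swap_mutation(perm, rate=0.2):
    """Swap two measurement labels with probability `rate`."""
    perm = list(perm)
    if rng.random() < rate:
        i, j = rng.choice(len(perm), size=2, replace=False)
        perm[i], perm[j] = perm[j], perm[i]
    return perm


def evolve(fitness, n_genes=16, pop_size=1000, n_generations=50):
    """fitness(perm) -> Monte Carlo estimate of sigma_S for that measurement order
    (lower is better).  Returns the best sequence found."""
    population = [list(rng.permutation(n_genes)) for _ in range(pop_size)]
    for _ in range(n_generations):
        scores = np.array([fitness(p) for p in population])
        ranked = [population[k] for k in np.argsort(scores)]
        parents = ranked[: pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size:
            i, j = rng.choice(len(parents), size=2, replace=False)
            children.append(swap_mutation(order_crossover(parents[i], parents[j])))
        population = children
    scores = np.array([fitness(p) for p in population])
    return population[int(np.argmin(scores))]
```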
Figure 7 shows an example of the optimization process provided by the genetic algorithm. For each generation, we display the average value and the lowest value of the uncertainty in Bell's parameter over the whole population. The efficiency of the algorithm is clearly shown by its convergence. The best value reached is almost half of the best one obtained for the simple sequences above (see Fig. 8). One quasi-optimal sequence is shown in Fig. 9; it is quite non-intuitive, with no particular pattern despite the high symmetry of the definition of Bell's parameter.
Consequently, in the error budget framework presented before, for a given uncertainty level on S, our best sequence performs four times faster. The prefactor 6 of the angular variance in the error budget of Eq. (10) can then be thought of as a complicated function of the actual experimental sequence, and the optimization we performed corresponds to the minimization of this function. We may assume that the reason why some sequences result in lower error is compensation between uncertainties, as the angle settings are not independent. A discussion of the covariance terms in Eq. (8) is proposed in Appendix C. Even though it does not reach the predictive level of the Monte Carlo simulations, taking into account the covariances between the angular random variables in the Gaussian error propagation formula shows error compensations and illustrates the fact that sequences having the same number of polarizer rotations do not have the same uncertainty on S.
We were so surprised by such a significant yet counter-intuitive improvement in the protocol that we thought it should be verified experimentally.
V. EXPERIMENTAL VERIFICATION OF THE QUASI OPTIMAL CONFIGURATION
A. Automation of measurements
Comparing two sequences is, however, quite a challenging task. Indeed, we need a reliable value not of Bell's parameter itself but of its uncertainty. We, thus, have to repeat the whole set of 16 measurements a sufficient number of times for each sequence. Moreover, the most important parameter is the angular uncertainty, which is unknown to the experimenter, who would have to keep it constant over several thousands of angular settings. This is clearly not humanly feasible.
We, thus, automated the measurement of Bell's parameter with computer-controlled motorized rotation mounts for the polarizers. We programmed different measurement sequences, adding random errors in the angles to simulate human setting. These errors follow a normal law whose dispersion can be set to any value equal to or greater than the repeatability of our mounts.
In order to ensure that we were observing the asymptotic behavior at large Np in Fig. 8, we set the integration time long enough to reach a sufficiently large number of detected pairs. We compared the “snake” and the “optimal” sequences for three values of the angular error: zero (which a human cannot achieve), a value typical of a good experimenter, and a second nonzero value. We ran the whole set of measurements typically 100 times for each case (see Table I).
For zero angular error, both configurations give experimentally the same standard deviation, as only the photon number statistics is involved. However, for both nonzero values of the angular error, the “optimal” sequence has a reduced standard deviation of Bell's parameter as compared to the “snake” sequence. As predicted by the genetic algorithm, the optimal sequence does indeed perform better.
It should also be noted that experimental values for the statistical uncertainty are quite close to those provided by the Monte Carlo models of Fig. 8, despite the fact that the experimental state we have created is not pure and, therefore, does not match the perfect Bell state used in these calculations. This means that the proposed optimal sequence is somewhat robust with respect to the experimental imperfections.
B. Decrease of σS with acquisition time
Because the coincidence detection circuit provides count numbers every 100 ms,16,17 it is possible to post-process the data to show how the standard deviation of Bell's parameter converges to its asymptotic value as the number of detected pairs increases, in much the same way as is easily done numerically (Fig. 10).
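As an illustration of this post-processing, the sketch below assumes (hypothetically) that the 100 ms coincidence records have been stored as an array of shape (n_runs, 16, n_blocks) and that a function bell_S_from_counts maps the 16 integrated coincidence numbers of one run to S (e.g., in the spirit of the earlier sketches); σS is then computed for increasing integration times.

```python
import numpy as np


def sigma_S_vs_integration(blocks, bell_S_from_counts):
    """blocks: array of shape (n_runs, 16, n_blocks) of coincidence counts per
    100 ms block, for each of the 16 settings of each repeated run (assumed layout).
    Returns sigma_S as a function of the number of accumulated 100 ms blocks."""
    n_runs, _, n_blocks = blocks.shape
    sigmas = []
    for m in range(1, n_blocks + 1):
        integrated = blocks[:, :, :m].sum(axis=2)       # first m blocks of every run
        S_values = np.array([bell_S_from_counts(counts) for counts in integrated])
        sigmas.append(S_values.std())
    return np.array(sigmas)
```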
Again, for zero angular error, both sequences present the same behavior, with a monotonic square-root decrease of the uncertainty on S with the total number of detected pairs (not shown). More interestingly, as predicted by our Monte Carlo simulations, we observe that the experimental σS settles asymptotically to a finite value for the two nonzero values of the angular error.
This experimentally demonstrates that some protocols are inherently more robust against handling errors, by taking advantage of the correlations that exist in the definition of Bell's parameter.
VI. CONCLUSION
Our initial motivation for understanding and quantifying the uncertainty in Bell's parameter S was to assess the significance level and to increase the strength of the violation of Bell's inequality. However, during a more advanced student project, we uncovered a subtle and unexpected influence of the measurement sequence on the measurement uncertainty. This allowed us to significantly improve the performance of our setup by optimizing the experimental protocol. This discovery may have implications for other experiments, even in completely different areas, as it often happens that an experimental result combines several individual measurements. We encourage readers to consider other examples where this effect may be seen.
AUTHOR DECLARATIONS
Conflict of Interest
The authors have no conflicts to disclose.
APPENDIX A: EXPERIMENTAL RESULTS FOR BELL'S PARAMETER MEASUREMENT
Below we provide a typical sequence of measurements for the determination of Bell's parameter. Each of the 16 values provided for NA, NB, and Nc is integrated over 10 s. The number of coincidences Nc provided here is not corrected for accidental coincidences. In this experiment, the rate of accidental coincidences is about .
APPENDIX B: ANALYTICAL CALCULATION OF THE UNCERTAINTY ON BELL'S PARAMETER WITH THE GAUSSIAN PROPAGATION ERROR METHOD
1. Coincidence counts contribution
2. Angular contribution
APPENDIX C: UNCERTAINTY WITH DEPENDENT RANDOM VARIABLES
With the measurements performed in a given sequence, the angular variables are no longer independent: when measurements i and j share a common polarizer angle, the corresponding covariance is no longer zero but equals the angular variance. Using the formulas for the partial derivatives given in Appendix B 2, we obtain u(S) from Eq. (C1) and can compare it with the Monte Carlo simulations. Results are given in Table IV.
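For reference, the generic propagation formula for dependent random variables, of which Eq. (C1) is presumably an instance, reads

\[
u^{2}(S)=\sum_{i}\left(\frac{\partial S}{\partial x_{i}}\right)^{2}u^{2}(x_{i})
+2\sum_{i<j}\frac{\partial S}{\partial x_{i}}\,\frac{\partial S}{\partial x_{j}}\,\operatorname{cov}(x_{i},x_{j}),
\]

where the x_i stand collectively for the Ni, αi, and βi, and where cov(αi, αj) (respectively cov(βi, βj)) equals the angular variance when measurements i and j use the same, untouched setting of polarizer A (respectively B) and vanishes otherwise.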
Compared to the value given by Eq. (10) with the same parameters, a reduction of the uncertainty is indeed observed when the covariances are taken into account (negative products of partial derivatives), but not at the level predicted by the Monte Carlo simulations or observed in experiments, especially for the optimal sequence. The uncertainty compensations may be more complicated than those of Eq. (C1) due to the nonlinear dependence of σS on the angles.
We developed an additional way to consider the uncertainty at each measurement of a sequence. We first generate the values of Ni for an ideal and deterministic measurement using a particular sequence (nominal angles and no counting noise), resulting in the ideal value of S. Then we successively replace each value of Ni with a new value calculated when the angles include Gaussian-distributed errors, and we recalculate the value of S(i) at each measurement. We repeat this process, averaging the values of S(i) obtained at each step, until the standard deviation of the statistical distribution of S(i) converges. For a given sequence, the value obtained after the last measurement (i = 16) is equal to its standard deviation σS.
Results are shown in Fig. 11 for the following sequences: the “Snake,” “Friezes 1 and 2,” the optimal sequence given by the genetic algorithm, and a sequence called “1:16,” which corresponds to the sequence where a new Gaussian-distributed error is added to the angle at each step at which a polarizer is rotated.
The evolution of the uncertainty at each measurement is shown. The unexpected behaviour is that, for the optimal sequence, the uncertainty in Bell's parameter stays stable after a certain measurement, implying that error compensation occurs at each polarizer rotation after that point. For the other sequences, the uncertainty keeps increasing toward the values given in Table IV.