Bell's inequality violation experiments are becoming increasingly popular in the practical teaching of undergraduate and master's degree students. Bell's parameter S is obtained from 16 polarization correlation measurements performed on entangled photon pairs. We first report here a detailed analysis of the uncertainty u(S) of Bell's parameter, taking into account coincidence count statistics and errors in the polarizers' orientations. We then show, using both computational modeling and experimental measurement, that the actual sequence of the polarizer settings has an unexpected and strong influence on the error budget. This result may also be relevant to measurements in other settings in which errors in parameters have non-random effects on the measurement.

Quantum optics experiments based on the generation of entangled photon pairs are useful pedagogical tools for undergraduate and master's degree students.1–5 Using the polarization properties of photons, they enable manipulation of the mathematical formalism in a simple two-dimensional space and illustrate the foundations of quantum physics: the preparation of quantum states and the probabilistic and statistical aspects of projective measurements. It is also possible with such experiments to introduce students to recent developments concerning quantum technologies and their applications to computing, communications, or sensing.6

The most striking of these quantum optics experiments is the violation of Bell's inequality,7,8 which demonstrates the non-locality of quantum physics. It uses entangled photon pairs to distinguish experimentally between local-realistic and non-local-realistic theories. The Nobel Prize in Physics 2022 was awarded to Alain Aspect, John F. Clauser, and Anton Zeilinger “for experiments with entangled photons, establishing the violation of Bell inequalities and pioneering quantum information science,”9 underscoring the present-day significance of quantum entanglement in physics. In addition to research in quantum physics, such experiments were developed for pedagogical purposes some 20 years ago10,11 and have become widely available.12

Improving the significance of the violation and closing loopholes is a topic for the research literature. In this pedagogical paper, we focus instead on reducing the measurement error. Usually, the uncertainty in Bell's parameter is calculated taking into account only coincidence count statistics.10,12,13 In our setup, in which the polarizers are rotated by hand, we must furthermore consider the experimental errors in the polarizers' orientations. Asking students to consider the trade-offs between these two main uncertainty sources leads to fruitful discussions.

Additionally, during a master's project, we realized that, unexpectedly, the actual experimental sequence of the polarizers' orientations had quite a significant impact on the error budget. As we shall see, this comes from correlations between the different measurements that make up S. We present these results not only because they may allow other instructors to help their students minimize the measurement error but also because we suspect that similar correlations between measurement results may occur unexpectedly in other systems, and we hope that our work will aid instructors in recognizing those correlations.

This paper is organized as follows: After the description of our experiment and the main theoretical aspects, Bell's parameter uncertainty is derived analytically by the Gaussian error propagation method. Then this uncertainty is numerically modelled by a Monte Carlo algorithm in order to better describe real experimental sequences. This method allowed us to optimize, with a genetic algorithm, the measurement protocol for Bell's parameter in order to reduce the acquisition time or strengthen the violation of Bell's inequality. Finally, we give experimental evidence of our findings.

Our graduate students use the experimental setup shown in Fig. 1 for measuring Bell's parameter.2,10 Entangled photon pairs are generated by parametric downconversion in a pair of type I beta-barium borate (BBO) crystals15 pumped by a 30 mW, 405 nm laser diode. A blue photon incident on a single BBO crystal is downconverted into a pair of near-infrared photons at 810 nm (called the “signal” and “idler” photons) that are polarized perpendicular to the crystallographic axis and are emitted on a cone with a 3° half angle. The crystallographic axes of the two BBO crystals are oriented at right angles so that one crystal produces horizontally polarized downconverted photons (state |H_s⟩|H_i⟩) and the second produces vertically polarized photons (state |V_s⟩|V_i⟩). A half-wave plate (λ/2) and a quartz phase compensator (C) are used to equalize the relative weights and phases of the horizontal and vertical components so that the photons can be emitted in the theoretical state
$|\Psi\rangle = \left(|H_s\rangle|H_i\rangle + |V_s\rangle|V_i\rangle\right)/\sqrt{2}.$  (1)

The polarizers PA and PB are oriented, respectively, at angles α and β from the vertical axis (state |V⟩). We denote their eigenstates |V_α⟩ = cos α |V⟩ + sin α |H⟩ and |V_β⟩ = cos β |V⟩ + sin β |H⟩. The polarizers perform the projection of the photon pair state |Ψ⟩ onto |V_α⟩|V_β⟩, which is a measure of the polarization correlation of the photons of the pair. Interference filters (IFs) placed in front of Geiger-mode avalanche photodiodes (APDs) transmit only photons in a spectral bandwidth of 10 nm centered at 810 nm. The count rates on each detector are denoted φ_A and φ_B. Coincidence detection on both detectors is performed in a τ_c ≈ 9.4 ns temporal window with a field programmable gate array (FPGA) card (Altera DE2-115) to record only the events that come from the same photon pair.16,17 The corresponding rate is denoted φ_c. Data are sent from the FPGA card to a computer every 100 ms.

We summarize here briefly the main theoretical aspects, following Ref. 10. The targeted Bell state | Ψ EPR is
$|\Psi_{\rm EPR}\rangle = \left(|H_s\rangle|H_i\rangle + |V_s\rangle|V_i\rangle\right)/\sqrt{2}.$  (2)
This maximally entangled state19 is then projected on the eigenstates | V α | V β of polarizers A and B. The number of coincidences Nc during the acquisition time Tacq is23 
$N_c(\alpha, \beta) = \frac{N_p}{2}\,\cos^2(\beta - \alpha),$  (3)
where Np is the number of coincidences that would be detected without the polarizers during the acquisition time.
We model the experimental photon pair state by a pure quantum state |Ψ_DC⟩,10
$|\Psi_{DC}\rangle = \cos\theta_l\,|H_s\rangle|H_i\rangle + e^{i\varphi}\sin\theta_l\,|V_s\rangle|V_i\rangle,$  (4)
where DC stands for down-conversion and θl is set by the orientation of the half-waveplate ( θ l = 45 ° ideally). φ is the relative phase between states | H s | H i and | V s | V i generated in either crystal of the BBO pair and can be zeroed by adjusting the quartz compensator. The number of coincidences for this state is
(5)
where the parameter C takes into account experimental imperfections.
The analysis proceeds by introducing the polarization correlation coefficient
$E(\alpha, \beta) = \frac{N_c(\alpha, \beta) + N_c(\alpha_\perp, \beta_\perp) - N_c(\alpha, \beta_\perp) - N_c(\alpha_\perp, \beta)}{N_c(\alpha, \beta) + N_c(\alpha_\perp, \beta_\perp) + N_c(\alpha, \beta_\perp) + N_c(\alpha_\perp, \beta)},$  (6)
where α_⊥ = α + 90° and β_⊥ = β + 90°. E(α, β) is obtained with four coincidence measurements and takes the extreme values of +1 and −1 for polarization settings that are always parallel or always perpendicular (N_c = N_p/2 or N_c = 0 according to Eq. (3)).
Finally, Bell's parameter S is obtained by 16 coincidence measurements and is defined by
$S = E(a, b) - E(a, b') + E(a', b) + E(a', b').$  (7)
For the Bell state |Ψ_EPR⟩ (θ_l = 45° and φ = 0), with a = −45°, a′ = 0°, b = −22.5°, and b′ = 22.5°, Bell's parameter reaches the maximal theoretical value S_max = 2√2. In contrast, local-realistic theories only allow non-entangled photon pairs (factorizable states), for which the maximum value of S is 2.

A typical sequence of measurements is given in Table II of Appendix A. Since each of the four correlation coefficients of Eq. (6) requires four coincidence measurements, 16 values are measured for N_A, N_B, and N_c. The acquisition time for each polarization configuration is T_acq = 10 s. The total number of involved photon pairs was found from the sum of two additional measurements: N_p = N_c(0°, 0°) + N_c(90°, 90°) = 12380 ± 111, from which we infer an incident pair rate φ_p = 1238 ± 11 s⁻¹. We compute S = 2.590 or S = 2.528 depending on whether or not we subtract the accidental coincidences.24 In practice, our students use the first value, so we will do the same. To make sense of this raw number, we introduce the notion of error budget to our students, i.e., the estimation of the different sources of uncertainty in the measurement.
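As an illustration of this bookkeeping, a minimal Python sketch could look as follows. The synthetic counts, the assumed singles rates, and the grouping of the 16 measurements into four blocks of four are illustrative choices, not the actual values of Table II; the accidental-coincidence correction follows Ref. 24.

```python
import numpy as np

rng = np.random.default_rng(0)
T_acq, tau_c, Np = 10.0, 9.4e-9, 12380          # acquisition time (s), coincidence window (s),
                                                # and total pair number quoted in the text

# beta_i - alpha_i for the 16 settings, grouped as E(a,b), E(a,b'), E(a',b), E(a',b')
theta = np.deg2rad([22.5, 112.5, -67.5, 22.5,    67.5, 157.5, -22.5, 67.5,
                    -22.5, 67.5, -112.5, -22.5,  22.5, 112.5, -67.5, 22.5])
N_c = rng.poisson(Np / 2 * np.cos(theta) ** 2)  # synthetic stand-in for Table II, Eq. (3)
phi_A = phi_B = 3.0e4                           # assumed singles rates (s^-1); order of
                                                # magnitude implied by phi_acc ~ 8 s^-1

def bell_S(nc):
    """Eqs. (6) and (7): Bell's parameter from 16 coincidence counts."""
    E = [(nc[k] - nc[k+1] - nc[k+2] + nc[k+3]) / nc[k:k+4].sum() for k in range(0, 16, 4)]
    return E[0] - E[1] + E[2] + E[3]

N_acc = tau_c * phi_A * phi_B * T_acq           # accidental coincidences per setting (Ref. 24)
print("S raw       :", bell_S(N_c))             # close to 2*sqrt(2) for this synthetic data
print("S corrected :", bell_S(N_c - N_acc))     # the synthetic N_c contain no accidentals,
                                                # so this line only illustrates the bookkeeping
```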

As described earlier, Bell's parameter S is computed from 16 individual measurements labelled by i ∈ {1, …, 16}. Each measurement involves three random variables (N_i, α_i, β_i), where N_i is the actual number of coincidences recorded in the i-th measurement and α_i and β_i are the actual polarizer orientations, which may differ from the desired values. We have motorized polarizer mounts (see below), but we do not use them in practical work with students because we think it diminishes the interest of hands-on experimentation. Thus, the main contributions to our error budget are the counting statistics and the accuracy of the orientations of the polarizers. To the best of our knowledge, only the counting uncertainty is usually taken into account. Photon pairs from spontaneous parametric downconversion follow a Poisson distribution,14 well approximated by a normal distribution for large numbers of pairs. We suppose, moreover, that the mean number of pairs involved in each measurement is constant and equal to N_p. All N_i are then calculated from Eq. (3) with an associated standard deviation u(N_i) = √N_i.

When uncertainty in the polarization orientation is included, it is usually assumed to follow a normal law centered on the nominal values (second and third columns of Table II in the  Appendix) with the same standard deviation u ( α i ) = u ( β i ) = δ θ, accounting for the experimental imperfections when rotating the polarizers, either by hand or with motorized rotation mounts.

A commonly used method to evaluate uncertainty on a given parameter is to compute its variance using the Gaussian error propagation formula
$u(S)^2 = \sum_{i=1}^{16}\left(\frac{\partial S}{\partial N_i}\right)^{2} u(N_i)^2 + \sum_{i=1}^{16}\left(\frac{\partial S}{\partial \theta_i}\right)^{2} u(\theta_i)^2 + \sum_{i\neq j}\frac{\partial S}{\partial \theta_i}\frac{\partial S}{\partial \theta_j}\,\mathrm{cov}(\theta_i, \theta_j),$  (8)
with θ_i = β_i − α_i and where cov(θ_i, θ_j) is the covariance between the random variables θ_i and θ_j. This term can take into account the fact that the angular random variables are not independent.
We first assume all measurements independent and, thus, discard the covariance term. u(S) is then given by
$u(S)^2 = \sum_{i=1}^{16}\left[\left(\frac{\partial S}{\partial N_i}\right)^{2} u(N_i)^2 + \left(\frac{\partial S}{\partial \alpha_i}\right)^{2} u(\alpha_i)^2 + \left(\frac{\partial S}{\partial \beta_i}\right)^{2} u(\beta_i)^2\right].$  (9)
Computation by hand of such a large formula is a somewhat tedious task (see Appendix B) but easily carried out using computer algebra. We find that, for a perfect Bell state, the contribution of the coincidences alone to the variance of S is 2/N_p. The contribution of the polarizers' orientation errors is much more intricate, as trigonometric functions appear both in the numerator and the denominator in the definition of S. However, in the end, all terms sum up quite nicely, and we get an overall 6δθ² contribution to the variance. We, thus, finally get
$u(S) = \sqrt{\frac{2}{N_p} + 6\,\delta\theta^2}.$  (10)
With Np = 12380 and δ θ = 0.5 ° (careful manual setting of the polarizers) we find, with degrees converted into radians
$u(S) \approx \sqrt{1.6\times 10^{-4} + 4.6\times 10^{-4}} \approx 0.025.$  (11)
Our students can safely conclude that Bell's inequality is strongly violated in their experiment.
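Readers who wish to reproduce this computer-algebra result can use the following minimal SymPy sketch. It assumes a perfect Bell state, Poissonian counts, independent angle errors, and the CHSH settings a = −45°, a′ = 0°, b = −22.5°, and b′ = 22.5°; the grouping conventions are illustrative.

```python
import sympy as sp

Np = sp.symbols('N_p', positive=True)

# Nominal CHSH settings (radians): a = -45 deg, a' = 0, b = -22.5 deg, b' = 22.5 deg
a, a2, b, b2 = -sp.pi / 4, sp.S(0), -sp.pi / 8, sp.pi / 8

# 16 nominal (alpha, beta) pairs, grouped four by four as in Eq. (6):
# (A,B), (A,B+90), (A+90,B), (A+90,B+90) for each of E(a,b), E(a,b'), E(a',b), E(a',b')
pairs = [(A + dA, B + dB)
         for A in (a, a2) for B in (b, b2)
         for dA in (0, sp.pi / 2) for dB in (0, sp.pi / 2)]

alphas, betas = sp.symbols('alpha1:17'), sp.symbols('beta1:17')
ncounts = sp.symbols('n1:17')

def E(n):                                   # Eq. (6)
    n1, n2, n3, n4 = n
    return (n1 - n2 - n3 + n4) / (n1 + n2 + n3 + n4)

S_N = E(ncounts[0:4]) - E(ncounts[4:8]) + E(ncounts[8:12]) + E(ncounts[12:16])   # Eq. (7)

N_expr = [Np / 2 * sp.cos(be - al) ** 2 for al, be in zip(alphas, betas)]        # Eq. (3)
nominal = dict(zip(alphas, [p[0] for p in pairs]))
nominal.update(zip(betas, [p[1] for p in pairs]))

# Counting term: sum_i (dS/dN_i)^2 u(N_i)^2 with u(N_i)^2 = N_i (Poisson statistics).
var_counts = sum(sp.diff(S_N, n) ** 2 * n for n in ncounts)
var_counts = var_counts.subs(dict(zip(ncounts, [e.subs(nominal) for e in N_expr])))
print(sp.nsimplify(sp.N(var_counts.subs(Np, 1), 15)))   # -> 2 ; scales as 1/N_p, i.e. 2/N_p

# Angular term: sum_i [(dS/dalpha_i)^2 + (dS/dbeta_i)^2] * delta_theta^2.
S_angles = S_N.subs(dict(zip(ncounts, N_expr)))
grad2 = sum(sp.diff(S_angles, x) ** 2 for x in alphas + betas)
print(sp.nsimplify(sp.N(grad2.subs(nominal).subs(Np, 1), 15)))  # -> 6 ; N_p cancels in each E,
                                                                #  hence 6*delta_theta^2
```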
We can then discuss the most efficient way to perform the experiment. The acquisition time T_acq is indeed the only adjustable parameter in the experiment, because δθ is set by the student's skill. Equation (10) can also be written as follows:
$u(S) = \sqrt{\frac{2}{\phi_p T_{\rm acq}} + 6\,\delta\theta^2}.$  (12)
For arbitrarily long acquisition times, σ_S → √6 δθ. However, in practice, it is no longer very profitable to integrate the signal once the two contributions are equal. For our incident pair rate (1238 s⁻¹), this occurs for T_acq ≈ 3.5 s. We commonly choose T_acq = 10 s so that the manipulation lasts a reasonable time. The two contributions to the variance of S are then, respectively, 1.6 × 10⁻⁴ and 4.6 × 10⁻⁴. The counting error is much smaller than the angular error, and we are almost at the limit of the performance of the experiment. To improve it, we need to reduce δθ by implementing more precise angular settings using vernier or motorized mounts. For lower count rates such an investment could be pointless, which shows the usefulness of making an error budget and performing this uncertainty analysis before starting the experiment.
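For completeness, the crossover time quoted above follows from equating the two contributions of Eq. (12), using the measured pair rate and δθ = 0.5° ≈ 8.7 × 10⁻³ rad:

$\frac{2}{\phi_p T_{\rm acq}} = 6\,\delta\theta^{2} \;\Longrightarrow\; T_{\rm acq} = \frac{1}{3\,\phi_p\,\delta\theta^{2}} \approx \frac{1}{3 \times 1238\ \mathrm{s^{-1}} \times (8.7\times 10^{-3}\ \mathrm{rad})^{2}} \approx 3.5\ \mathrm{s}.$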

However, the above uncertainty analysis does not correctly model real experiments, in which normally only one and not both polarizers is rotated between acquisitions. While the N_i can still be considered as independent variables, α_i and β_i are no longer independent, as one may have in practice α_{i+1} = α_i or β_{i+1} = β_i. The analytical Gaussian error propagation formula above, which implicitly assumes that both angles are reset for each measurement, can, thus, be considered as a worst-case estimate. We will introduce a numerical approach to deal with such real experimental conditions.

In order to better determine the statistical uncertainty of Bell's parameter, we implemented a Monte Carlo algorithm.20,21 We first benchmarked our code against the analytical formula (Eq. (10)) using what we call in the following the standard sequence, in which both polarizers are reset for each of the 16 measurements. The process for estimating the statistical uncertainty of Bell's parameter is as follows (a minimal code sketch is given after the list):

  1. Assign a random variable to each input parameter: in our case, the 16 triplets (N_i, α_i, β_i). The mean number of incident photon pairs N_p being known, the random variable associated with N_i follows a Poisson distribution with a mean value given either by Eq. (3) (true Bell state) or by Eq. (5) (non-ideal Bell state). The angular errors are generated with a Gaussian distribution of width δθ.

  2. Calculate the associated Bell's parameter.

  3. Repeat the above procedure Nruns times.

  4. The uncertainty u(S) is then identified as the standard deviation σS of Bell's parameter calculated on the statistical ensemble.
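A minimal Python sketch of steps 1–4 for the standard sequence (perfect Bell state, Eq. (3)) could look as follows; the variable names and the ordering of the 16 measurements into four groups of four are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

a, ap, b, bp = -45.0, 0.0, -22.5, 22.5                  # CHSH settings (degrees)
PAIRS = np.array([(A + dA, B + dB)                      # 16 nominal (alpha, beta) pairs,
                  for A in (a, ap) for B in (b, bp)     # grouped four by four as in Eq. (6)
                  for dA in (0.0, 90.0) for dB in (0.0, 90.0)])

def bell_S(Nc):
    """Eqs. (6) and (7) from 16 coincidence counts."""
    E = [(Nc[k] - Nc[k+1] - Nc[k+2] + Nc[k+3]) / Nc[k:k+4].sum() for k in range(0, 16, 4)]
    return E[0] - E[1] + E[2] + E[3]

def one_run(Np, dtheta_deg):
    """Steps 1 and 2: draw noisy angles and Poissonian counts, return one value of S."""
    alpha = PAIRS[:, 0] + rng.normal(0.0, dtheta_deg, 16)   # both polarizers reset for
    beta  = PAIRS[:, 1] + rng.normal(0.0, dtheta_deg, 16)   # every measurement
    return bell_S(rng.poisson(Np / 2 * np.cos(np.deg2rad(beta - alpha)) ** 2))  # Eq. (3)

def sigma_S(Np=10_000, dtheta_deg=0.5, n_runs=1000):
    """Steps 3 and 4: repeat and take the standard deviation over the ensemble."""
    return np.std([one_run(Np, dtheta_deg) for _ in range(n_runs)])

print(sigma_S())    # to be compared with Eq. (10): sqrt(2/N_p + 6*delta_theta^2)
```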

We show in Fig. 2 the evolution of the standard deviation of Bell's parameter σ_S as a function of N_runs for different values of the polarizer orientation uncertainty δθ, using the true Bell state. We have chosen a large number of photon pairs N_p = 10 000 so that only the angular error contributes significantly to σ_S. This figure illustrates that convergence of the Monte Carlo simulations is typically obtained for N_runs = 1000, whatever the chosen value of δθ.

Figure 3(a) shows how this calculated standard deviation depends on both the mean total count number Np and the polarizers' angular uncertainty δ θ.

For perfect angular setting of the polarizers, δθ = 0° (purple solid line), σ_S reduces to the counting error, with a slight deviation from the expected value (2/N_p)^{1/2} (dashed blue line) at low counts, where the Poisson and Gauss distributions actually differ.

Conversely, σ_S reaches a nonzero asymptotic value for large N_p that depends on the angular uncertainty δθ. A linear fit (inset of Fig. 3(b)) gives σ_S ≈ 2.448 δθ. The slope is not significantly different from √6 ≈ 2.449, obtained with our analytical analysis.

These asymptotic behaviours establish the consistency of our analytical and numerical approaches. We can now use our numerical simulations in more realistic cases, where the experimenter may rotate a single polarizer from one measurement to another. As stated before, this operating mode induces correlations between the αi's and between the βi's that we don't know how to take into account analytically.

Intuitively, we may expect a reduced uncertainty in S by lowering the number of interventions of the experimenter. The three sequences shown in Fig. 4 are ones that minimize the total number of rotations of polarizers A and B. These have only N_Rot = 17 independent α_i's and β_i's instead of 32: for the first measurement, the experimenter sets both polarizers and then, for each of the remaining 15 measurements, only one polarizer (A or B) is rotated. In the following, we call them short sequences.

Numerical implementation of such realistic sequences is straightforward. For the first measurement, both angles are randomly chosen. Then, from one measurement to the next, we choose a new random value of either α or β according to the specified sequence.
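The following sketch implements this rule for an arbitrary sequence of the 16 checkerboard squares, numbered row by row as in Fig. 6 (the assignment of rows to polarizer A, columns to polarizer B, and the ordering of the angle values along each axis are assumptions of the sketch):

```python
import numpy as np

rng = np.random.default_rng(2)

# Checkerboard of the 16 settings (cf. Figs. 4 and 6), numbered row by row: rows are the
# four settings of polarizer A, columns the four settings of polarizer B. The ordering of
# the angle values along each axis is an assumption of this sketch.
ALPHAS = np.array([-45.0, 45.0, 0.0, 90.0])     # a, a+90, a', a'+90  (degrees)
BETAS  = np.array([-22.5, 67.5, 22.5, 112.5])   # b, b+90, b', b'+90  (degrees)

SNAKE = [1, 2, 3, 4, 8, 7, 6, 5, 9, 10, 11, 12, 16, 15, 14, 13]
STANDARD = list(range(1, 17))                   # order irrelevant when reset_both=True

def bell_S(Ngrid):
    """Eqs. (6) and (7) from the 4x4 grid of coincidence counts."""
    sgn = np.array([[1.0, -1.0], [-1.0, 1.0]])
    E = [[(sgn * Ngrid[2*i:2*i+2, 2*j:2*j+2]).sum() / Ngrid[2*i:2*i+2, 2*j:2*j+2].sum()
          for j in range(2)] for i in range(2)]
    return E[0][0] - E[0][1] + E[1][0] + E[1][1]

def one_run(sequence, dtheta_deg, Np, reset_both=False):
    """Simulate one set of 16 measurements visited in `sequence`; a new angle error is
    drawn only when the corresponding polarizer is rotated (or always, if reset_both)."""
    alpha, beta = np.empty((4, 4)), np.empty((4, 4))
    prev_r = prev_c = None
    for label in sequence:
        r, c = (label - 1) // 4, (label - 1) % 4
        if reset_both or r != prev_r:
            cur_a = ALPHAS[r] + rng.normal(0.0, dtheta_deg)
        if reset_both or c != prev_c:
            cur_b = BETAS[c] + rng.normal(0.0, dtheta_deg)
        alpha[r, c], beta[r, c] = cur_a, cur_b
        prev_r, prev_c = r, c
    return bell_S(rng.poisson(Np / 2 * np.cos(np.deg2rad(beta - alpha)) ** 2))   # Eq. (3)

def sigma_S(sequence, dtheta_deg=1.0, Np=10_000, n_runs=2000, reset_both=False):
    return np.std([one_run(sequence, dtheta_deg, Np, reset_both) for _ in range(n_runs)])

if __name__ == "__main__":
    print("standard:", sigma_S(STANDARD, reset_both=True))
    print("snake   :", sigma_S(SNAKE))        # expected to be lower, cf. Fig. 5
```

Running it with reset_both=True corresponds to the standard sequence, while passing the Snake permutation implements the correlated case compared in Fig. 5.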

We compare in Fig. 5(a) the standard deviation of Bell's parameter for the standard measurement sequence and the short sequences “Snake” and “Friezes 1 and 2” depicted in Fig. 4.

First, we observe that, globally, the short sequences give significantly better results, i.e., lower asymptotic uncertainty on Bell's parameter.

Second, we notice that the shortest sequences are not all equally effective (Fig. 5(b)) with Snake performing almost 20% better than Frieze 2. This is unexpected: it is not only the total number of rotations NRot that matters but also the order in which they are performed. Correlations of angular settings of the polarizers impact Bell's parameter in a quite intricate way.

Our inability to explain why some short sequences are better than others implies that there might be better measurement sequences than the ones considered thus far. To address this issue, an optimization algorithm is needed, since the total number of possible protocols is 16! ≈ 2 × 10¹³. As we cannot perform an exhaustive exploration of the whole sequence space, we have chosen to use a genetic algorithm technique.22

First, we label the 16 different polarizers' settings as shown in Fig. 6. The visual representation of the experimental sequences as a path visiting each square of the checkerboard (Fig. 4) is then equivalent to a permutation of the sequence [1, …, 16], which is more suitable for computer handling.

Accordingly, the “Snake” and the two “Friezes” are encoded as the following permutations (a short consistency check in code is given after the list):

  • Snake: [ 1 2 3 4 8 7 6 5 9 10 11 12 16 15 14 13 ]

  • Frieze 1: [ 1 2 3 4 8 12 16 15 14 13 9 5 6 7 11 10 ]

  • Frieze 2: [ 1 2 3 4 8 12 16 15 14 10 11 7 6 5 9 13 ]
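Assuming the row-major labelling of Fig. 6 (rows for polarizer A, columns for polarizer B), a few lines of Python confirm that these three permutations indeed require only N_Rot = 17 independent polarizer settings:

```python
SNAKE    = [1, 2, 3, 4, 8, 7, 6, 5, 9, 10, 11, 12, 16, 15, 14, 13]
FRIEZE_1 = [1, 2, 3, 4, 8, 12, 16, 15, 14, 13, 9, 5, 6, 7, 11, 10]
FRIEZE_2 = [1, 2, 3, 4, 8, 12, 16, 15, 14, 10, 11, 7, 6, 5, 9, 13]

def n_settings(sequence):
    """Number of independent angle settings N_Rot needed to run a sequence."""
    rows = [(k - 1) // 4 for k in sequence]      # polarizer A setting at each step
    cols = [(k - 1) % 4 for k in sequence]       # polarizer B setting at each step
    n = 2                                        # both polarizers set for the first square
    for i in range(1, 16):
        n += (rows[i] != rows[i - 1]) + (cols[i] != cols[i - 1])
    return n

print([n_settings(s) for s in (SNAKE, FRIEZE_1, FRIEZE_2)])   # -> [17, 17, 17]
```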

We have chosen the following parameters for our genetic optimization process (a simplified code sketch is given after the list):

  • N_p = 100 000 incoming pairs, so that the standard deviation of Bell's parameter depends essentially on the angular uncertainty of the polarizers' orientations, set to δθ = 1°.

  • A true Bell state is considered, for which the number of coincidences for each measurement is given by Eq. (3): N_i = (N_p/2) cos²(β_i − α_i).

  • The population is formed by 1000 sequences randomly chosen for the first generation (“parents”). We let the population evolve and select the 1000 “children” with lowest uncertainty on Bell's parameter for the next generation.22 

  • For each “generation,” Nruns = 2000 iterations are computed for each “individual” in order to ensure the convergence of the Monte-Carlo simulation.

  • The genetic algorithm is used without any constraints: there are no limits on the number of polarizers' rotations (it is not restricted to one at each step), and there are no conditions between the first and the last position of polarizers.
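A stripped-down sketch of such an optimization is given below: elitist selection of the sequences with the lowest σ_S, a simple order crossover, and swap mutations. It is not our production code; it assumes that the sequence-aware Monte Carlo sketch given earlier has been saved as a module named bell_mc.py (our naming) providing sigma_S(sequence, dtheta_deg, Np, n_runs), and the population size and number of Monte Carlo runs are reduced with respect to the values listed above to keep the run time modest.

```python
import numpy as np

from bell_mc import sigma_S          # hypothetical module name for the earlier sketch

rng = np.random.default_rng(3)

def mutate(seq):
    """Swap two randomly chosen positions of a permutation."""
    child = list(seq)
    i, j = rng.choice(16, size=2, replace=False)
    child[i], child[j] = child[j], child[i]
    return child

def crossover(p1, p2):
    """Keep a slice of p1 in place and fill the remaining positions in the order of p2."""
    i, j = sorted(rng.choice(16, size=2, replace=False))
    kept = set(p1[i:j])
    rest = [x for x in p2 if x not in kept]
    return rest[:i] + list(p1[i:j]) + rest[i:]

def evolve(pop_size=60, n_gen=30, dtheta_deg=1.0, Np=100_000, n_runs=400):
    cost = lambda s: sigma_S(s, dtheta_deg, Np, n_runs)
    pop = [list(rng.permutation(np.arange(1, 17))) for _ in range(pop_size)]
    for gen in range(n_gen):
        pop.sort(key=cost)                              # elitist selection: keep best half
        parents = pop[:pop_size // 2]
        children = [mutate(crossover(parents[rng.integers(len(parents))],
                                     parents[rng.integers(len(parents))]))
                    for _ in range(pop_size - len(parents))]
        print(gen, cost(parents[0]))                    # best sigma_S of this generation
        pop = parents + children
    return parents[0]

print("best sequence:", evolve())
```

Because the objective is itself a Monte Carlo estimate, it is noisy; a sufficient number of runs per evaluation (we use N_runs = 2000 in practice) is needed to rank the sequences reliably.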

Figure 7 shows an example of the optimization process provided by the genetic algorithm. For each generation, we display the average value and the lowest value of the uncertainty in Bell's parameter over the whole population. The efficiency of the algorithm is clearly shown by its convergence. The best value reached, σ_S ≈ 0.014, is almost half of the best one obtained for the simple sequences above (see Fig. 8). One quasi-optimal sequence is the following: [11 7 5 8 12 9 13 14 10 6 2 3 15 16 4 1]. It is quite non-intuitive, with no particular pattern despite the high symmetry of Bell's parameter definition (Fig. 9).

Consequently, in the error budget framework presented before, for a given uncertainty level on S, our best sequence reaches that level four times faster with δθ = 1°. The prefactor 6 of δθ² in the error budget Eq. (10) can then be thought of as a complicated function of the actual experimental sequence. The optimization we performed corresponds to the minimization of this function. We may assume that the reason why some sequences result in lower error is the compensation between uncertainties, as the angle settings are not independent. A discussion of the covariance terms in Eq. (8) is proposed in Appendix C. Even if it does not reach the predictive level of the Monte Carlo simulations, taking covariances between the angular random variables into account in the Gaussian error propagation formula reveals error compensations and illustrates the fact that sequences having the same number of polarizer rotations do not have the same uncertainty on S.

We were so surprised by such a significant yet counter-intuitive improvement in the protocol that we thought it should be verified experimentally.

Comparing two sequences is, however, quite a challenging task. Indeed, we need a reliable value not of Bell's parameter itself but of its uncertainty. We, thus, have to repeat the whole set of 16 measurements a sufficient number of times, say 100 for each sequence. Moreover, the most important parameter is the angular uncertainty, unknown to the experimenter who would have to keep it constant over several thousands of angular settings. This is clearly not humanly feasible.

We, thus, automated the measurement of Bell's parameter with computer-controlled motorized rotation mounts for the polarizers. We programmed different measurement sequences, adding random errors in the angles to simulate human setting. These errors follow a normal law whose dispersion δ θ can be set to any value equal to or greater than the 0.1 ° repeatability of our mounts.

In order to ensure that we were observing the asymptotic behavior at large N_p in Fig. 8, we set the integration time long enough so that N_p > 10⁴. We compared the “snake” and the “optimal” sequences for δθ = 0 ± 0.1° (which a human cannot do), δθ = 0.5 ± 0.1° (a good experimenter), and δθ = 1 ± 0.1°. We ran the whole set of measurements typically 100 times for each δθ (see Table I).

For δθ = 0 ± 0.1°, both configurations give experimentally the same standard deviation, as only the photon number statistics is involved. However, for either δθ = 0.5 ± 0.1° or δθ = 1 ± 0.1°, the “optimal” sequence has a reduced standard deviation of Bell's parameter as compared to the “snake” sequence. As predicted by the genetic algorithm, the optimal sequence does actually perform better.

It should also be noted that experimental values for the statistical uncertainty are quite close to those provided by the Monte Carlo models of Fig. 8, despite the fact that the experimental state we have created is not pure and, therefore, does not match the perfect Bell state used in these calculations. This means that the proposed optimal sequence is somewhat robust with respect to the experimental imperfections.

Because the coincidence detection circuit provides count numbers every 100 ms,16,17 it is possible to post-process the data in order to show how the standard deviation of Bell's parameter converges to its asymptotic value as the number of detected pairs increases, in much the same way as is easily done numerically (Fig. 10).
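A short sketch of this post-processing is given below, with assumed array conventions: counts[r, i, t] is the number of coincidences in the t-th 100 ms bin of measurement i during repetition r of the sequence. The synthetic demonstration contains no angle errors, so it only exhibits the monotonic decrease, not the plateau of Fig. 10.

```python
import numpy as np

def bell_S(Nc16):
    """Eqs. (6) and (7) from 16 coincidence counts ordered in four groups of four."""
    E = [(Nc16[k] - Nc16[k+1] - Nc16[k+2] + Nc16[k+3]) / Nc16[k:k+4].sum()
         for k in range(0, 16, 4)]
    return E[0] - E[1] + E[2] + E[3]

def sigma_S_vs_time(counts):
    """Standard deviation of S over repetitions, using only the first k bins."""
    n_rep, _, n_bins = counts.shape
    cumulative = counts.cumsum(axis=2)                  # counts integrated up to bin k
    return np.array([np.std([bell_S(cumulative[r, :, k]) for r in range(n_rep)])
                     for k in range(n_bins)])

# Synthetic demo: perfect Bell state, 1238 pairs/s, 100 ms bins, 100 repetitions.
rng = np.random.default_rng(4)
theta = np.deg2rad([22.5, 112.5, -67.5, 22.5,    67.5, 157.5, -22.5, 67.5,
                    -22.5, 67.5, -112.5, -22.5,  22.5, 112.5, -67.5, 22.5])
mean_per_bin = 1238 * 0.1 / 2 * np.cos(theta) ** 2
counts = rng.poisson(mean_per_bin[None, :, None], size=(100, 16, 100))
print(sigma_S_vs_time(counts)[[0, 9, 99]])   # sigma_S after 0.1 s, 1 s, and 10 s per setting
```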

Again, for δθ = 0°, both sequences present the same behavior, with a monotonic decrease of the uncertainty in S as the inverse square root of the total number of detected pairs (not shown). More interestingly, as predicted by our Monte Carlo simulations, we observe that the experimental σ_S settles asymptotically to a finite value for δθ = 0.5° and δθ = 1°.

This experimentally demonstrates that there are protocols that are inherently more robust in dealing with handling errors by taking advantage of the correlations that exist in the definition of Bell's parameter.

Our initial motivation to understand and quantify uncertainty in Bell's parameter S was to quantify the significance level and to increase the strength of the violation of Bell's inequality. However, during a more advanced student project, we uncovered a subtle and unexpected influence of the sequence of measurements performed on the measurement uncertainty. This allowed us to significantly improve the performance of our setup by optimizing the experimental protocol. This discovery may have implications for other experiments, even in completely different areas, as it often happens that an experimental result combines several individual measurements. We encourage readers to consider other examples where this effect may be seen.

The authors have no conflicts to disclose.

Below we provide a typical sequence of measurements for the determination of Bell's parameter. Each of the 16 values provided for N_A, N_B, and N_c is integrated over T_acq = 10 s. The number of coincidences N_c provided here is not corrected for accidental coincidences. Within this experiment, the rate of accidental coincidences φ_acc is about 8 s⁻¹.

According to Table II and Eq. (7), Bell's parameter can be written as
(B1)
By considering a true Bell state |Ψ_EPR⟩ = (|H_s⟩|H_i⟩ + |V_s⟩|V_i⟩)/√2, the number of coincidences N_i is given by N_i = (N_p/2) cos²(β_i − α_i) = (N_p/2) cos²θ_i, N_p being the number of incident pairs. It is then possible from this expression and Eq. (7) to calculate analytically all the partial derivatives needed to compute the uncertainty of Bell's parameter in the following expression:
$u(S)^2 = \sum_{i=1}^{16}\left[\left(\frac{\partial S}{\partial N_i}\right)^{2} u(N_i)^2 + \left(\frac{\partial S}{\partial \alpha_i}\right)^{2} u(\alpha_i)^2 + \left(\frac{\partial S}{\partial \beta_i}\right)^{2} u(\beta_i)^2\right].$  (B2)
These derivatives will include the terms cos²(β_i − α_i). In Table III, we show the value of θ_i = β_i − α_i for each measurement. There are only two values of cos²θ_i: C_p = (2 + √2)/4 and C_m = (2 − √2)/4.
1. Coincidence counts contribution
In evaluating the first term in the summation in Eq. (B2), it is helpful to start by considering only the measurements i = 1, …, 4, for which the partial derivatives are
(B3)
(B4)
From these expressions, we can write
$\sum_{i=1}^{4}\left(\frac{\partial S}{\partial N_i}\right)^{2} u(N_i)^2 = \frac{1}{2N_p}.$  (B5)
By symmetry, the same expression is obtained for i = 5, …, 8, i = 9, …, 12, and i = 13, …, 16, resulting in
$\sum_{i=1}^{16}\left(\frac{\partial S}{\partial N_i}\right)^{2} u(N_i)^2 = \frac{2}{N_p}.$  (B6)
The contribution of the detected coincidences to the variance is 2 / N p.
2. Angular contribution
In evaluating the second and third terms in the summation in Eq. (B2), the following expressions are helpful:
(B7)
(B8)
(B9)
From these expressions, we can write
For i = 1, …, 4, we have
Making use of Eq. (B9), this becomes
Since sin²θ_1 = (2 − √2)/4 = C_m and sin²θ_3 = (2 + √2)/4 = C_p, we obtain
$\sum_{i=1}^{4}\left[\left(\frac{\partial S}{\partial \alpha_i}\right)^{2} u(\alpha_i)^2 + \left(\frac{\partial S}{\partial \beta_i}\right)^{2} u(\beta_i)^2\right] = \frac{3}{2}\,\delta\theta^2.$  (B10)
Again, by symmetry, all the angular uncertainties contribute the same so we get
$\sum_{i=1}^{16}\left[\left(\frac{\partial S}{\partial \alpha_i}\right)^{2} u(\alpha_i)^2 + \left(\frac{\partial S}{\partial \beta_i}\right)^{2} u(\beta_i)^2\right] = 6\,\delta\theta^2.$  (B11)
The contribution of the error on polarizers' orientation to the variance is 6δθ².
Hence, the final expression for the uncertainty on Bell's parameter is
$u(S) = \sqrt{\frac{2}{N_p} + 6\,\delta\theta^2}.$  (B12)
In the expression for the coincidences N_i = (N_p/2) cos²(β_i − α_i) = (N_p/2) cos²θ_i, the random variables θ_i are not independent of each other if the polarizers are not reoriented for each measurement. An initial attempt to account for this dependence uses the enhanced Gaussian error propagation formula
$u(S)^2 = \sum_{i=1}^{16}\left(\frac{\partial S}{\partial N_i}\right)^{2} u(N_i)^2 + \sum_{i=1}^{16}\left(\frac{\partial S}{\partial \theta_i}\right)^{2} u(\theta_i)^2 + \sum_{i\neq j}\frac{\partial S}{\partial \theta_i}\frac{\partial S}{\partial \theta_j}\,\mathrm{cov}(\theta_i, \theta_j),$  (C1)
where cov ( θ i , θ j ) is the covariance between random variables θi and θj. The first-order partial derivatives can be positive or negative, allowing some sequences to lower u(S).

With θ_i = β_i − α_i, we have u(θ_i)² = 2δθ² and cov(θ_i, θ_j) = δθ² if the measurements i and j have a common angle. Using the formulas of the partial derivatives given in Appendix B 2, we obtain u(S) from Eq. (C1) and can compare with Monte Carlo simulations. Results are given in Table IV.
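The following sketch evaluates Eq. (C1) numerically for a given sequence, using finite differences instead of the analytical derivatives of Appendix B; the checkerboard labelling and axis ordering are the same assumptions as in the earlier sketches, so the numbers need not match Table IV exactly.

```python
import numpy as np

# beta_i - alpha_i for the 16 measurements in the canonical grouping of Eqs. (6) and (7)
THETA0 = np.deg2rad([22.5, 112.5, -67.5, 22.5,    67.5, 157.5, -22.5, 67.5,
                     -22.5, 67.5, -112.5, -22.5,  22.5, 112.5, -67.5, 22.5])
SNAKE = [1, 2, 3, 4, 8, 7, 6, 5, 9, 10, 11, 12, 16, 15, 14, 13]

def S_of_theta(theta, Np=1e5):
    Nc = Np / 2 * np.cos(theta) ** 2                                   # Eq. (3)
    E = [(Nc[k] - Nc[k+1] - Nc[k+2] + Nc[k+3]) / Nc[k:k+4].sum() for k in range(0, 16, 4)]
    return E[0] - E[1] + E[2] + E[3]                                   # Eq. (7)

def realization_ids(sequence):
    """For each canonical measurement index, the id of the physical A and B settings
    actually used (a new id means the polarizer was rotated, hence a new random error)."""
    a_real, b_real = np.empty(16, dtype=int), np.empty(16, dtype=int)
    a_id = b_id = -1
    prev_r = prev_c = None
    for label in sequence:                       # row-major checkerboard labels, as before
        r, c = (label - 1) // 4, (label - 1) % 4
        a_id += r != prev_r
        b_id += c != prev_c
        idx = 4 * (2 * (r // 2) + c // 2) + 2 * (r % 2) + c % 2        # canonical index
        a_real[idx], b_real[idx] = a_id, b_id
        prev_r, prev_c = r, c
    return a_real, b_real

def u_S_covariance(sequence, dtheta_deg=1.0, Np=1e5, h=1e-6):
    """Eq. (C1) with cov(theta_i, theta_j) = dtheta^2 when i and j share an untouched angle."""
    dth = np.deg2rad(dtheta_deg)
    a_real, b_real = realization_ids(sequence)
    cov = dth**2 * ((a_real[:, None] == a_real[None, :]).astype(float)
                    + (b_real[:, None] == b_real[None, :]))            # 2*dth^2 on the diagonal
    grad = np.array([(S_of_theta(THETA0 + h * e) - S_of_theta(THETA0 - h * e)) / (2 * h)
                     for e in np.eye(16)])
    return np.sqrt(2 / Np + grad @ cov @ grad)                         # counting term + Eq. (C1)

print(u_S_covariance(SNAKE))    # to be compared with 0.043 when all covariances are dropped
```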

Compared with the value u(S) = 0.043 (Eq. (10) with N_p = 10⁵ and δθ = 1°), a reduction of the uncertainty is indeed observed when covariances are taken into account (through negative products of partial derivatives), but not at the level predicted by the Monte Carlo simulations or observed in experiments, especially for the optimal sequence. Uncertainty compensations may be more complicated than those of Eq. (C1) due to the nonlinear dependence of σ_S on the angles.

We developed an additional way to consider the uncertainty at each step of a sequence. We first generate values of N_i for an ideal and deterministic measurement using a particular sequence (N_i = (N_p/2) cos²(β_i − α_i) and N_p = 10⁵), resulting in S = 2√2. Then we successively replace each value of N_i with a new value calculated when the angles include Gaussian-distributed errors with δθ = 1°, and we re-calculate the value of S(i) after each measurement. We repeat this process, averaging the values of S(i) obtained at each step, until the standard deviation σ_S(i) of the statistical distribution of S(i) converges. For a given sequence, σ_S(16) is equal to its standard deviation σ_S.

Results are shown in Fig. 11 for the following sequences: the “Snake,” the “Friezes 1 and 2,” the optimal sequence given by the genetic algorithm, and a sequence called “1:16,” which corresponds to the sequence [1 2 ⋯ 16], where a new Gaussian-distributed error is added to an angle at each step at which the corresponding polarizer is rotated.

The evolution of σ_S(i) after each measurement is shown. The unexpected behaviour is that of the optimal sequence: the uncertainty in Bell's parameter stays stable after the 4th measurement, implying that error compensation occurs at each polarizer rotation after that point. For the other sequences, σ_S(i) keeps increasing toward the values given in Table IV.

1. E. J. Galvez and M. Beck, “Quantum optics experiments with single photons for undergraduate laboratories,” Proc. SPIE 9665, 966513 (2007).
2. E. J. Galvez, “Resource Letter SPE-1: Single-photon experiments in the undergraduate laboratory,” Am. J. Phys. 82, 1018–1028 (2014).
3. J. J. Thorn, M. S. Neel, V. W. Donato, G. S. Bergreen, R. E. Davies, and M. Beck, “Observing the quantum behavior of light in an undergraduate laboratory,” Am. J. Phys. 72, 1210–1219 (2004).
4. J. A. Carlson, M. D. Olmstead, and M. Beck, “Quantum mysteries tested: An experiment implementing Hardy's test of local realism,” Am. J. Phys. 74, 180–186 (2006).
5. J. Brody and C. Selton, “Quantum entanglement with Freedman's inequality,” Am. J. Phys. 86, 412–416 (2018).
6. D. Browne, S. Bose, F. Mintert, and M. S. Kim, “From quantum optics to quantum technologies,” Prog. Quantum Electron. 54, 2–18 (2017).
7. J. S. Bell, “On the Einstein–Podolsky–Rosen paradox,” Phys. (Long Island City, N.Y.) 1, 195–200 (1964).
8. J. F. Clauser, M. A. Horne, A. Shimony, and R. A. Holt, “Proposed experiment to test local hidden-variable theories,” Phys. Rev. Lett. 23(15), 880–884 (1969).
9. The Nobel Prize in Physics 2022, NobelPrize.org, Nobel Prize Outreach AB 2022, Wed. 26 Oct 2022, <https://www.nobelprize.org/prizes/physics/2022/summary/>.
10. D. Dehlinger and M. W. Mitchell, “Entangled photons, nonlocality, and Bell inequalities in the undergraduate laboratory,” Am. J. Phys. 70, 903–910 (2002).
11. D. Dehlinger and M. W. Mitchell, “Entangled photon apparatus for the undergraduate laboratory,” Am. J. Phys. 70, 898–902 (2002).
12. Turnkey kits are now commercially available, for example, from QuTools: <http://www.qutools.com/qued/> or from Qubitekk: <https://qubitekk.com/products/quantum-mechanics-lab-kit/>.
13. S. Meraner, R. Chapman, S. Frick, R. Keil, M. Prilmüller, and G. Weihs, “Approaching the Tsirelson bound with a Sagnac source of polarization-entangled photons,” SciPost Phys. 10, 17–35 (2021).
14. T. Larchuk, M. Teich, and B. Saleh, “Statistics of entangled-photon coincidences in parametric downconversion,” Ann. New York Acad. Sci. 755, 680–686 (1995).
15. P. G. Kwiat, E. Waks, A. G. White, I. Appelbaum, and P. H. Eberhard, “Ultrabright source of polarization-entangled photons,” Phys. Rev. A 60, R773–R776 (1999).
16. D. Branning, S. Bhandari, and M. Beck, “Low-cost coincidence-counting electronics for undergraduate quantum optics,” Am. J. Phys. 77, 667–670 (2009).
17. D. Branning and M. Beck, “An FPGA-based module for multiphoton coincidence counting,” Proc. SPIE 8375, 83750F1-10 (2012).
18. B. J. Pearson and D. P. Jackson, “A hands-on introduction to single photons and quantum mechanics for undergraduates,” Am. J. Phys. 78, 471–484 (2010).
19. W. J. Munro, D. F. V. James, A. G. White, and P. G. Kwiat, “Maximizing the entanglement of two mixed qubits,” Phys. Rev. A 64, 030302 (2001).
20. M. G. Cox and B. R. L. Siebert, “The use of a Monte Carlo method for evaluating uncertainty and expanded uncertainty,” Metrologia 43, 178–188 (2006).
21. Joint Committee for Guides in Metrology, JCGM 101:2008, “Evaluation of measurement data—Supplement 1 to the ‘Guide to the expression of uncertainty in measurement’—Propagation of distributions using a Monte Carlo method.”
22. D. A. Coley, An Introduction to Genetic Algorithms for Scientists and Engineers (World Scientific Publishing, Singapore, 1999).
23. As usual when measuring coincidence rates, the non-perfect quantum efficiency of the detection system is irrelevant as long as it is random and independent of the photon polarization, which is the case here. We can, therefore, assume that all pairs transmitted by both polarizers are detected, or, alternatively, that undetected photon pairs were not even generated.
24. By chance, two uncorrelated counts from noise or background radiation may be detected in the same temporal window, giving a so-called accidental coincidence rate φ_acc = τ_c φ_A φ_B (Ref. 18).