We describe a simple multivariate technique of likelihood ratios for improved discrimination of signal and background in multi-dimensional quantum target detection. The technique combines two independent variables, time difference and summed energy, of a photon pair from the spontaneous parametric downconversion source into an optimal discriminant. The discriminant performance was studied using experimental data and Monte Carlo modeling with clear improvement shown compared to previous techniques. As novel detectors become available, we expect this type of multivariate analysis to become increasingly important in multi-dimensional quantum optics.
Non-classical correlation is at the heart of a range of quantum-enhanced technologies.1–4 In quantum optics, correlated photon pairs are routinely produced using workhorse spontaneous parametric downconversion (SPDC) sources. These sources have been used to generate pairs of photons that are correlated in almost every imaginable degree of freedom (DOF), including time,5 polarization,6 position-momentum,7 orbital angular momentum (OAM),8 and frequency.9 The polarization degree of freedom naturally lends itself to quantum information theory; indeed, much of the seminal work in the field employed polarization-entangled photon pairs.10–13 However, polarization is by nature only two dimensional, and so each photon can carry only a single bit of information. Other DOFs are, in principle, unbounded, offering high-dimensional encoding. But while these high-dimensional states offer great promise, measuring them efficiently remains a significant challenge. Traditional avalanche single-photon detectors offer excellent temporal resolution but are single-mode and so require scanning techniques to measure continuous DOFs.7,14–17 Alternatively, single-photon-sensitive cameras can be employed,18–21 but they suffer from a low frame rate, making continuous readout with good temporal resolution impossible.
The Tpx3Cam is an optical camera based on a technology originating in the high-energy physics community, adapted for optical detection by bonding a fast readout chip to an optical sensor.22 The resulting spatial resolution is comparable to that of intensified CCD or electron-multiplying CCD (EMCCD) cameras, but with the so-called data-driven readout, only pixels in which the signal exceeds a threshold are read out, allowing continuous operation and efficient time stamping with nanosecond resolution. By appending an image intensifier, the Tpx3Cam can be made to detect single photons, bringing a paradigm shift in quantum imaging devices. We expect this sensor to have applications in a range of quantum and quantum-inspired sensing techniques, including ghost imaging23–25 and quantum illumination.26,27
In a recent paper,28 we applied the Tpx3Cam to quantum target detection—a simplified form of quantum illumination that does not require entanglement from a photon pair source, only correlation. Pairs of photons were generated by SPDC with one photon from each pair (the “herald”) measured locally and the other (the “signal”) sent to a target, which is hidden in a large amount of background light. After interacting with the target, correlations between the scattered signal photons and the herald photons are measured. This technique provides improved background rejection compared to simply measuring the back-scattered signal because the signal and herald modes are perfectly correlated, whereas the background is uncorrelated.26,29,30
In previous work employing only timing correlations,30 a peak in the one-dimensional histogram of photon arrival times reveals the presence of a target in the signal beam, and the timing delay of the peak gives the distance to the target.31 The target can be said to be "detected" if the size of the peak exceeds its statistical fluctuations by a predetermined amount (e.g., two sigma). In photon-counting experiments, the statistical fluctuations scale as $\sqrt{n}$, where n is the number of detected photons, so one must transmit enough photons to achieve the desired detection confidence. By exploiting the multidimensional capabilities of the Tpx3Cam as a two-photon spectrometer, it was possible to simultaneously measure frequency and time correlations, allowing us to generate a two-dimensional histogram of photon arrival time difference and frequency sum [see Fig. 2(a)]. Because of the two-dimensional nature of the correlation, the background, i.e., accidental coincidences, is greatly reduced, resulting in an increased detection confidence. Put another way, the same detection confidence can be achieved by sending fewer photons.
In our recent work,28 the multi-variable correlations were analyzed in a simple manner: a temporal "coincidence window" was used to isolate pairs of photons that arrive with the correct time separation, and then a spectral cut was used to select appropriate frequency correlations. In this work, we show that the background can be further reduced, and therefore the detection confidence increased, by applying a multivariate, or combined, discriminant. Optimal discrimination is widely used in particle physics32 and other fields,33,34 and in this work, it is applied to quantum optics. As the Tpx3Cam and other data-driven cameras35–37 become more prevalent in quantum optics, we expect this type of analysis to become increasingly important beyond quantum target detection. Furthermore, it is simple to extend this analysis to higher dimensions, for example, to analyze multi-variable hyper-entangled states.38,39
Here, we use one of the most straightforward multivariate techniques, the likelihood ratio,32,40–42 to combine the time difference and photon energy into a single discriminant. It can be shown that this combination is optimal, meaning that the resulting discriminating variable provides the best possible background suppression at a given selection efficiency.43,44 For the discussion below, it is also important that the two variables, time and energy, are independent, i.e., the distribution of one is unaffected by any selection on the other.45 We would also like to emphasize that the experimental resolution of the presented time-energy measurements is orders of magnitude too coarse to resolve time-energy entanglement effects, which we therefore do not consider.
Let us assume that there are n variables, which have different distributions for signal and background. For independent variables, the discriminant can be written as a product of ratios,40

$$Y = \prod_{i=1}^{n} Y_i, \qquad Y_i = \frac{f_B^{(i)}(x_i)}{f_S^{(i)}(x_i)}, \qquad (1)$$

where $Y_i$ is the ratio of the probability density functions for background, $f_B$, and signal, $f_S$, of the i-th variable $x_i$. The above procedure is very simple and generalizes to any number of discriminating variables.
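As an illustration (not the analysis code used in this work), the product of per-variable density ratios can be sketched as follows; the Gaussian signal and flat background densities are toy placeholders:

```python
import math

def likelihood_ratio(values, f_signal, f_background):
    """Product of per-variable background/signal density ratios.

    values       : observed value of each discriminating variable
    f_signal     : per-variable signal probability density functions
    f_background : per-variable background probability density functions
    A small Y is signal-like; a large Y is background-like.
    """
    y = 1.0
    for x, f_s, f_b in zip(values, f_signal, f_background):
        y *= f_b(x) / f_s(x)
    return y

# Toy one-variable example: Gaussian signal on a flat background.
f_s = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
f_b = lambda x: 0.1
assert likelihood_ratio([0.0], [f_s], [f_b]) < likelihood_ratio([4.0], [f_s], [f_b])
```

Because the variables are independent, each ratio is evaluated separately and multiplied, which is what makes the generalization to more dimensions straightforward.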
The approach described above requires knowledge of the signal and background distributions for the variables, which are used to form the discriminant. These distributions are measured experimentally and modeled using Monte Carlo (MC) simulations. The MC simulations allow us to test the discriminant and evaluate performance in different regimes that were not investigated experimentally (see the supplementary material).
The experimental setup used for the measurements is shown schematically in Fig. 1 and is described in detail elsewhere.28 Briefly, an SPDC source is employed to produce pairs of photons with the wavelength centered around 810 nm. One of the photons (signal) is sent onto a target and subsequently collected with a small telescope, while the other (herald) photon is sent directly to the camera. Before entering the fast camera, the two photons are dispersed spectroscopically with a diffractive grating. The target is obscured by broadband “jamming” light from a halogen lamp introduced from behind the target.
The fast camera, Tpx3Cam, is based on a Timepix3 chip46 with a timing resolution of 1.5 ns coupled to an optical sensor.47,48 The data obtained from the camera consist of x and y positions of hit pixels, ToA (Time of Arrival) and ToT (Time over Threshold) of the signal. The latter specifies deposited energy within the pixel. In order to achieve single photon sensitivity, an intensifier is employed, converting single photons into flashes of light, which are registered by the camera. The Hi-QE Red intensifier from Photonis49 has a quantum efficiency of about 20% at 810 nm and employs a P47 fast scintillator, which has timing performance compatible with nanosecond scale resolution.50 All 256 × 256 pixels of the camera function independently with a low dead time and can be read out with a maximum total rate of about 10M photons per second.51,52 Similar configurations of the intensified Tpx3Cam have been used recently for characterization of quantum networks53,54 and photon counting.55
The raw data are post-processed to identify “clusters,” collections of pixels each corresponding to a single photon, and to perform centroiding. Centroiding improves the spatial resolution using a profile of the deposited energy in the cluster. We also apply a ToT-based correction to remove the time-walk effect in ToA and to further improve the time resolution. The post-processing steps are discussed in detail elsewhere.53,56
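A simplified sketch of the centroiding step (our own illustration, not the pipeline of Refs. 53 and 56; taking the cluster time from the highest-ToT pixel is an assumption):

```python
def centroid(cluster):
    """ToT-weighted centroid of a cluster of hit pixels.

    cluster: list of (x, y, toa_ns, tot) tuples, one per pixel.
    Returns (x_c, y_c, toa_ns), with the cluster time taken from the
    pixel with the largest ToT, i.e., the largest deposited energy.
    """
    total_tot = sum(tot for _, _, _, tot in cluster)
    x_c = sum(x * tot for x, _, _, tot in cluster) / total_tot
    y_c = sum(y * tot for _, y, _, tot in cluster) / total_tot
    toa = max(cluster, key=lambda pixel: pixel[3])[2]
    return x_c, y_c, toa
```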
The inset of Fig. 1 shows the measured data as a two-dimensional distribution of pixel occupancy of the camera data. The signal and herald modes after the diffractive grating appear as two horizontal stripes, while the uniform background is mostly due to the intensifier dark counts and remaining stray light. In the spectrometer, the photon wavelength has a linear relationship to the position along the stripe, which can be derived by a simple calibration procedure.28 The downconversion process in the crystal requires conservation of energy, and therefore,

$$\frac{1}{\lambda_p} = \frac{1}{\lambda_h} + \frac{1}{\lambda_s}, \qquad (2)$$

where $\lambda_p$ is the wavelength of the pump photon from the laser, 405 nm, and $\lambda_{h(s)}$ is the wavelength of the herald (signal) photon. The spectral resolution is different for the herald and signal photons due to the different types of multi-mode fibers used for their collection28 and is measured to be 1.6 and 3.2 pixels for the herald and signal photons, respectively. The pump laser linewidth (full width at half maximum) contributes, together with the spectrometer resolution, to the width of the measured sum energy.
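The energy-conservation relation fixes the reconstructed pump wavelength of every pair directly from the two calibrated photon wavelengths; a one-line check (our naming):

```python
def reconstructed_pump_wl(wl_herald_nm, wl_signal_nm):
    """Energy conservation in SPDC: 1/lambda_p = 1/lambda_h + 1/lambda_s."""
    return 1.0 / (1.0 / wl_herald_nm + 1.0 / wl_signal_nm)

# A degenerate pair at 810 nm reconstructs the 405 nm pump.
assert abs(reconstructed_pump_wl(810.0, 810.0) - 405.0) < 1e-9
```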
To get the number of time coincidences for the photon pairs, we employed a previously used algorithm.28 The data are selected according to the regions of interest, the two stripes, and for every event in one stripe, the event with the smallest time difference $|\Delta t|$ is found in the other stripe. The two-dimensional distribution of the sum energy and time difference of the photon pairs in the data is shown in Fig. 2(a). The sum energy, expressed through the pump photon wavelength, can be described by a normal distribution of width 0.36 nm due to a combination of the pump laser linewidth and the spectrometer resolution. The time coincidence peak is also a normal distribution, of width 7.55 ns, due to the temporal resolution of the camera.
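The pairing step can be sketched as a nearest-neighbour search in the time of arrival (a minimal illustration under our own naming; the actual algorithm of Ref. 28 also applies the region-of-interest selections):

```python
import bisect

def find_coincidences(toa_stripe_a, toa_stripe_b):
    """For each event in stripe A, find the stripe-B event with the
    smallest |delta t|. Inputs are ToA values in ns; returns a list of
    (delta_t, toa_a, toa_b) tuples with delta_t = toa_b - toa_a.
    """
    toa_b_sorted = sorted(toa_stripe_b)
    pairs = []
    for t_a in toa_stripe_a:
        i = bisect.bisect_left(toa_b_sorted, t_a)
        # the nearest neighbour is either toa_b_sorted[i-1] or toa_b_sorted[i]
        candidates = toa_b_sorted[max(i - 1, 0):i + 1]
        t_b = min(candidates, key=lambda t: abs(t - t_a))
        pairs.append((t_b - t_a, t_a, t_b))
    return pairs
```

Sorting one stripe and using binary search keeps the pairing fast even at the camera's full photon rate.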
To study the signal and background separation using the likelihood ratio discriminant, we developed a MC model corresponding to experimental conditions such as signal and background resolutions and rates, including various inefficiencies of the whole system. More details on the model and its matching to the dataset are given in the supplementary material.
Next, we applied the aforementioned coincidence algorithm to find pairs of photons. It blindly processes the MC sample and determines which events are paired based on the closest ToA. We can then plot one-dimensional histograms of the time difference and the sum energy, represented by the reconstructed pump photon wavelength $\lambda_p$, as plotted in Figs. 3(a) and 3(b), respectively. The MC simulations are in good agreement with the measured data. Note that each photon's origin was tagged in the MC to track it when forming photon pairs in the coincidence algorithm. This allows us to unambiguously find true signal coincidence events (brown color in Fig. 3) and identify different types of background events (shades of green). This is a useful feature of the MC simulation, which is unavailable in experimental data. Figure 3 also illustrates very well that, before any selections are made, the signal to background ratio (SBR) is very poor.
The same data can be plotted in the two-dimensional representation shown in Fig. 2 for both data and MC. The bright spot in the center of the plot is due to true coincidences between photons produced in a pair, which are highly correlated in time and anti-correlated in wavelength. The background is due to uncorrelated events such as photon-background or background-background coincidences. It is easy to see that the signal-to-background contrast is far higher in the 2D representation than in either of the 1D histograms. Indeed, Figs. 2 and 3 show a good visual representation of the difference between the "box cuts" applied in our previous work and the combined discriminant employed here. In the previous work, a region of interest was defined first in one degree of freedom (time) and then in the other (energy), which is equivalent to selecting the peaks in the one-dimensional histograms. Here, instead, the combined discriminant uses both variables when defining a region of interest, effectively selecting an ellipse around the peak of the two-dimensional histogram. This idea is explored more rigorously below.
Above, we studied in detail two discriminating variables, one derived from the temporal measurements and the other from the spectroscopic measurements. We emphasize that this information is available on a pair-by-pair basis and, therefore, can be combined individually for each registered pair. The advantage of the fast camera is its ability to simultaneously record both spatial coordinates and temporal information for each photon.
The combined discriminant that we define is a function of two inputs: the photon pair time difference $\Delta t$ and the wavelength of the reconstructed pump photon $\lambda_p$. To combine $\Delta t$ and $\lambda_p$, we start by defining background-to-signal ratios for these two variables, $Y_t$ and $Y_\lambda$, as

$$Y_t(\Delta t) = \frac{f_B^{t}(\Delta t)}{f_S^{t}(\Delta t)}, \qquad (3)$$

$$Y_\lambda(\lambda_p) = \frac{f_B^{\lambda}(\lambda_p)}{f_S^{\lambda}(\lambda_p)}, \qquad (4)$$
where the probability density functions for the true coincidences [denominators of Eqs. (3) and (4)] are assumed to be Gaussian in spectrum and time. The probability density functions of the background noise [numerators of Eqs. (3) and (4)] fall linearly and exponentially for the spectrum and time, respectively. These are empirical models with all parameters tuned to fit the experimental data (see Fig. 3 and supplementary material Fig. S2).
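With these shapes, the two ratios can be written down explicitly. In the sketch below, the signal widths are the measured values quoted earlier (7.55 ns and 0.36 nm), while the background normalizations, slope, and decay constant are hypothetical placeholders standing in for the fitted parameters:

```python
import math

SIGMA_T_NS = 7.55    # measured width of the time-coincidence peak
SIGMA_WL_NM = 0.36   # measured width of the sum-energy (pump wavelength) peak
WL_PUMP_NM = 405.0   # pump laser wavelength

def gauss(x, mu, sigma):
    """Normalized Gaussian probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def y_time(dt_ns, b0=0.02, tau_ns=200.0):
    """Exponentially falling background over a Gaussian signal (b0, tau_ns are placeholders)."""
    return b0 * math.exp(-abs(dt_ns) / tau_ns) / gauss(dt_ns, 0.0, SIGMA_T_NS)

def y_wavelength(wl_nm, b0=0.05, slope=0.002):
    """Linearly falling background over a Gaussian signal (b0, slope are placeholders)."""
    return (b0 - slope * (wl_nm - WL_PUMP_NM)) / gauss(wl_nm, WL_PUMP_NM, SIGMA_WL_NM)

def y_combined(dt_ns, wl_nm):
    """Product of the two independent ratios, as in Eq. (1)."""
    return y_time(dt_ns) * y_wavelength(wl_nm)
```

True coincidences near zero time difference and 405 nm give a small Y, so a selection keeps events below a chosen Y threshold.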
Combining the ratios $Y_t$ and $Y_\lambda$ according to Eq. (1),

$$Y(\Delta t, \lambda_p) = Y_t(\Delta t)\, Y_\lambda(\lambda_p), \qquad (5)$$

yields the two-dimensional likelihood ratio function $Y$, with the result shown in Fig. 4: a deep, well-defined minimum of the function in the center.
With the two-dimensional likelihood ratio calculated, we can proceed to use it to process the data. The selection criteria to eliminate background are determined by drawing a Y-isoline around the peak in the 2D histogram; events outside this isoline are rejected, and those inside are retained. Appropriate selection of the isoline is important: if it is too large, then too many background events are included, reducing the SBR; if the isoline is too small, the SBR will be high, but signal events will be discarded along with the background. This trade-off is illustrated in Fig. 5, where the SBR is plotted as a function of the selection efficiency $\eta_s$, defined here as the fraction of true coincidences that remain after selection. A figure of merit for this analysis is the SBR, defined simply as $s/b$, where s is the number of signal counts and b the number of background counts. Alternatively, we can consider the "sample purity" p, a commonly used metric in multivariate analysis, defined as $p = s/(s+b)$.
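Given per-pair discriminant values and the MC truth labels described above, the trade-off curve of Fig. 5 can be traced by scanning the Y threshold; a sketch with our own naming (the `is_signal` flags come from the MC truth tagging):

```python
def sbr_vs_efficiency(y_values, is_signal, thresholds):
    """Scan Y-isoline thresholds, keeping pairs with Y below the threshold.

    Returns (threshold, efficiency, sbr, purity) tuples, where
    efficiency = kept signal / all signal, sbr = s/b, purity = s/(s+b).
    """
    n_signal_total = sum(is_signal)
    curve = []
    for thr in thresholds:
        s = sum(1 for y, sig in zip(y_values, is_signal) if y < thr and sig)
        b = sum(1 for y, sig in zip(y_values, is_signal) if y < thr and not sig)
        eff = s / n_signal_total if n_signal_total else 0.0
        sbr = s / b if b else float("inf")
        purity = s / (s + b) if s + b else 0.0
        curve.append((thr, eff, sbr, purity))
    return curve
```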
We tested the discriminating power of this newly obtained variable Y by comparing it with the performance of $Y_t$ and $Y_\lambda$ on their own. We also analyzed the MC data using simple box cuts, where the temporal cut was fixed at ±10 ns and the spectral cut width was varied, as in Ref. 28.
In Fig. 5, the SBR, sample purity, and selection efficiency are plotted for various selection parameters using different techniques: sum energy only, time difference only, time-energy box cuts, and the combined discriminant. In each case, as the selected region becomes smaller, the sample purity increases as more background is eliminated, but the selection efficiency decreases as true coincidences are also rejected. The performance is clearly superior for the multivariate techniques compared to the single-variable approaches, and the combined discriminant outperforms the box-cut method. According to the MC simulations, at a fixed selection efficiency $\eta_s$, the SBR is increased by 26% for the optimal discriminant. For comparison, we also applied a similar analysis to the experimental data, shown with dotted lines in Fig. 5. The MC/data agreement is within errors, with the same functional form, confirming the better performance of the optimal discriminant.
For target detection, the SBR is not the only relevant parameter; one must also consider the ratio of the number of counts to their statistical fluctuations, hereafter referred to as the signal-to-noise ratio (SNR). Unlike the SBR, which is constant, the SNR increases with integration time. Since the SNR includes statistical fluctuations in both the signal and the background, it is closely related to the SBR, and an improvement in SBR directly leads to an improvement in SNR.28 When using the optimal discriminant, an SBR improvement of 26% corresponds to an SNR improvement of 7% for a given integration time. Put another way, if target detection is defined to occur at a particular SNR threshold, then it requires 13% fewer photons to detect a target using the optimal discriminant compared to the box cut.
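The photon-budget statement is consistent with the SNR scaling as the square root of the number of transmitted photons: a 7% SNR gain at a fixed photon number translates into 1 - 1/1.07^2, i.e., roughly 13%, fewer photons at a fixed SNR. A quick check:

```python
snr_gain = 1.07                       # SNR improvement at fixed photon number
photon_ratio = 1.0 / snr_gain ** 2    # photons needed for the same SNR
saving = 1.0 - photon_ratio
print(f"{saving:.1%} fewer photons")  # prints "12.7% fewer photons"
```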
In summary, we employed a recently developed fast camera, Tpx3Cam, in the context of quantum target detection and considered an optimal discriminant based on the likelihood ratios for two measured variables, energy and time. The optimal discriminant can be established in advance through modeling of the experiment, and so tagging can be performed online with no computational overhead. We achieved a 26% improvement of SBR for the same selection efficiency compared to the previously used selections.
The optimal discriminant is also more resource efficient than previous techniques, requiring 13% fewer photons to achieve a detection threshold. We believe that this multivariate approach is a promising avenue to analyze quantum sensing protocols using correlated photon pairs and, in general, high-dimensional quantum states. This analysis was performed on two-dimensional data but is easily extended to higher dimensions; we expect the improvements to be more pronounced when more variables are included. Finally, another opportunity for future work is to extend the multivariate analysis to the determination of the distance to the target using all available information in the data.
See the supplementary material for more information on the MC model and on the performance predictions for different resolutions and background rates.
The authors are grateful to Philip Bustard, Denis Guay, and Doug Moffatt for technical support and stimulating discussion. We acknowledge support from Defence Research and Development Canada (DRDC) and from the U.S. Department of Energy QuantISED award. S.F. acknowledges support under the Science Undergraduate Laboratory Internships Program (SULI) by the U.S. Department of Energy. This work was also supported by Grant No. LM2018109 of Ministry of Education, Youth and Sports as well as by Centre of Advanced Applied Sciences No. CZ.02.1.01/0.0/0.0/16-019/0000778, co-financed by the European Union.
The data that support the findings of this study are available from the corresponding author upon reasonable request.