Quantum sensing has a bright future for applications in need of impeccable sensitivities. The study of periodic fields has resulted in various techniques, which deal with the limited coherence time of the quantum sensor in several ways. However, the periodic signal to measure can include forms of randomness as well, such as changes in phase or in frequency. In such cases, the long measurement times required to detect the smallest field amplitudes hamper the effectiveness of conventional techniques. In this paper, we propose and explore a robust sensing technique to combat this problem. Instead of measuring the signal amplitude directly, we measure another global property of the signal, in this case the standard deviation. This results in a much-improved sensitivity for signals with randomness. We analyze the advantages and limitations of this technique, and we demonstrate its operation with a measurement using a nitrogen-vacancy center. This work encourages exploring measurements of alternative statistics.
I. INTRODUCTION
Owing to its sensitivity and potentially high spatial resolution, quantum sensing has found applications in various fields. Given their excellent quantum properties even under ambient conditions and their atomic size,1 nitrogen-vacancy (N–V) centers are a promising option. Example applications are nuclear magnetic resonance,2 spin-wave measurements,3 measurement of neuronal action potentials,4 and imaging the hydrodynamic flow of current.5 Various techniques allow studying a variety of periodic signals, such as the classical Hahn-echo sequence for a synchronized measurement,6,7 multi-area sequences for a high dynamic range,8 Fourier transforms for the frequency components of long data sets,2,9 employing virtual transitions of periodically driven N–V centers for high-frequency signals,10 and fitting techniques for low-frequency signals.11 However, there are scenarios where none of these techniques are ideal.
One such scenario is a signal of interest that is periodic but contains randomness, which potentially changes its direction or oscillation phase. This can be modeled by a signal whose phase changes arbitrarily. A technique such as the Hahn echo would, therefore, average over many phases, resulting in a decrease in the signal [see Figs. 1(a) and 1(b)]. An example of such a signal appears in the search for dark matter.12,13 Some dark matter candidates, e.g., wave dark matter,14,15 have a finite coherence time and change velocity and phase at random over time. Since the effective amplitude of the dark matter signal is rather small, measurement times much longer than the time scale of such changes are required.
In the following, we present a measurement technique based on finding the standard deviation of the signal. Such a method is less affected by several kinds of randomness in the signal, which increases the robustness of its detection, and it requires little data memory. We analyze the technique in depth, analytically and by numerical simulation, to explore its properties. Finally, we perform example measurements using a single N–V center at room temperature.
II. RESULTS
A. Method
The std of data depends on their distribution. Hence, as long as the distribution does not change, the std remains the same. A change of the phase at random moments, or of the frequency, does not change the distribution, and thus, the std inherently remains the same. This is reflected in the independence of Eq. (3) from the phase and frequency. It should be noted that although the std seems independent of the offset as well, a gradual change in the offset, as opposed to changes in the frequency and phase, would change the average, thus actually affecting the std. However, the std of a finite signal with, e.g., random phase changes does become a random variable, with σy its expected value, since the result of Eq. (1) will vary depending on the precise data. In addition, for the sinusoid of Eq. (2), computing the std for a finite signal could result in a slightly different expected value, but this change is negligible compared to the other uncertainties relevant here when many periods are measured.
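As an illustration of this invariance, the following minimal sketch (with assumed amplitude, frequency, and sampling parameters) shows that the sample std of a sinusoid stays near A/√2 under random phase flips, whereas a phase-locked average of the same data is suppressed:

```python
# Minimal sketch (assumed amplitude, frequency, and sampling; random pi phase flips as a
# simple example of randomness): the sample std stays near A/sqrt(2), while a
# phase-locked average is suppressed by the flips.
import numpy as np

rng = np.random.default_rng(0)
A, f, dt, N = 1.0, 100.0, 1e-4, 200_000           # hypothetical signal and sampling parameters

t = np.arange(N) * dt
flips = rng.random(N) < 0.01                      # ~one flip per period on average
phase = np.pi * np.cumsum(flips)                  # accumulated random pi phase jumps
y_clean = A * np.sin(2 * np.pi * f * t)
y_random = A * np.sin(2 * np.pi * f * t + phase)

print(np.std(y_clean), np.std(y_random), A / np.sqrt(2))   # both stds ~ A/sqrt(2)
print(np.mean(y_clean * np.sin(2 * np.pi * f * t)),        # coherent demodulation ~ A/2
      np.mean(y_random * np.sin(2 * np.pi * f * t)))       # strongly suppressed by the flips
```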
B. Noise
Without noise, the sensitivity of any method would be perfect. Therefore, we have to analyze the noise to determine the sensitivity of this method. Here, we assume that there is a white noise source with a normal distribution that has a mean of 0 and a std of σn. It should be noted that σn is a property of the noise and thus a constant. Figure 2(a) shows an example of data consisting of noise only. The principle of the std method does not change for other noise sources, but the sensitivities derived later might. Nonetheless, this noise model is a fair representation of many sources, such as counted photons. Moreover, when utilizing sufficiently large ensembles of sensors, the central limit theorem ensures that the resulting distribution is normal.
C. Signal and noise
Several examples of the distribution of the sampled data are shown in Fig. 2(c), where σy is varied while keeping σn constant.
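Since the signal and the noise are independent, the variance of the sampled data approaches the sum of the signal and noise variances. A short sketch of such sampled data, with assumed values for A and σn, illustrates this and the corresponding estimate of σy:

```python
# Sketch with assumed values for A and sigma_n: signal plus independent Gaussian noise, so
# the measured std approaches sqrt(sigma_y**2 + sigma_n**2), and sigma_y follows by
# subtracting the known noise variance.
import numpy as np

rng = np.random.default_rng(1)
N, sigma_n = 100_000, 1.0
for A in (0.3, 1.0, 3.0):                          # vary sigma_y = A/sqrt(2) at constant sigma_n
    sigma_y = A / np.sqrt(2)
    t = np.arange(N)
    data = A * np.sin(0.05 * t + rng.uniform(0, 2 * np.pi)) + rng.normal(0, sigma_n, N)
    sigma_tot = np.std(data)
    sigma_y_est = np.sqrt(max(sigma_tot**2 - sigma_n**2, 0.0))
    print(A, sigma_tot, np.hypot(sigma_y, sigma_n), sigma_y_est)
```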
D. Sensitivity
The idea is to sense the signal amplitude A by measuring the std of the data. As A is directly proportional to σy [see Eq. (3)], we explore detecting σy first. As a reminder, it should be noted that the symbols σn and σy are the actual stds of the noise and the signal, respectively, while their sampled counterparts are the values estimated from the measured data.
To verify the derivations above, we simulate a signal with random noise a sufficient number of times to accurately find the distribution of the resulting uncertainty of the variable to be measured, σy. In this numerical simulation, we randomly change the phase of the signal such that the phase has a half-life of ten periods. The results are shown in Fig. 2(d); they are consistent with Eqs. (7), (8), and (13) for the three explored regimes. The sensitivities for these simulations are shown in Fig. 2(e), showing consistency with Eq. (10).
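A sketch of such a Monte Carlo run is given below; the half-life of ten periods follows the text, while the amplitude, noise level, record length, and number of repetitions are assumed values chosen for illustration:

```python
# Sketch of such a Monte Carlo run (assumed amplitude, noise level, record length, and
# number of repetitions; the phase half-life of ten periods follows the text).
import numpy as np

rng = np.random.default_rng(2)
samples_per_period, periods, A, sigma_n = 10, 1000, 1.0, 1.0
N = samples_per_period * periods
p_change = 1.0 - 0.5 ** (1.0 / (10 * samples_per_period))   # phase half-life of ten periods

estimates = []
for _ in range(200):                                         # repeat to obtain the spread
    t = np.arange(N) / samples_per_period                    # time in units of the period
    events = rng.random(N) < p_change                        # random phase-change moments
    new_phases = rng.uniform(0, 2 * np.pi, N)
    idx = np.maximum.accumulate(np.where(events, np.arange(N), 0))
    phase = new_phases[idx]                                  # hold the latest random phase
    data = A * np.sin(2 * np.pi * t + phase) + rng.normal(0, sigma_n, N)
    estimates.append(np.sqrt(max(np.var(data) - sigma_n**2, 0.0)))

print(np.mean(estimates), np.std(estimates), A / np.sqrt(2))  # expected value ~ A/sqrt(2)
```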
For the case of N = 10^7, we vary the randomness of the phase. For a completely random phase, in which case essentially only the distribution is measured, there is still an optimum at about σy = σn, but the randomness of the signal immediately limits the uncertainty for stronger signals. When the randomness is reduced, the flat region, where the sensitivity is the same as for a conventional method (applied to a signal without randomness), increases in size.
For the sake of comparison, we simulate the conventional Hahn-echo technique and another method,11 as this comparison shows some interesting points. As the accumulated result needs to be adapted for the randomness, we scale the resulting simulated distribution of the measured values to have the expected average σy, so that we get a fair estimate of the uncertainty. This does not work in the high-noise regime, as then simply the uncertainty of the noise would be scaled to the signal value, giving an incorrect estimate. In Fig. 2(d), the resulting uncertainty turns out to be independent of N. The reason is that an increase in N, on the one hand, reduces the averaged noise, while on the other hand, it increases the accumulated randomness of the signal; these effects compensate each other exactly. Hence, as shown in Fig. 2(e), the longer the measurement time (the larger N), the better the sensitivity of the std method becomes relative to a conventional method. Therefore, the number of orders of magnitude of improvement depends strongly on the level of randomness and on the measurement time.
E. Sensitivity for N–V centers
Here, we explore the sensitivity of measuring a magnetic field with a single N–V center. The measured signal stems from photons emitted by the N–V center, whose emission intensity depends on the spin state. When performing a measurement, here a free-induction decay (FID) sequence7,20 [see Fig. 3(a)], the magnetic field for each data point follows from the gradient of the response, as shown in Fig. 3(b). This gradient depends on the parameters of the N–V center and the chosen interaction delay [τ in Fig. 3(a)].17
Owing to the photon counting, the relevant noise source here is shot noise.21 In the linear regime, the relation between this shot-noise std and the std of the output noise is derived from the gradient, as shown in Fig. 3(b). Outside the linear regime, the std of the intensity needs to be converted while taking into account the sinusoidal shape resulting from the rotational symmetry of the spin, as explored in Appendix B.
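A hedged sketch of the linear-regime conversion is given below; the response model, offset, contrast, and count rate are assumed illustration values, not the parameters of the measured N–V center:

```python
# Hedged sketch of the linear-regime conversion (the response model, offset, contrast, and
# count rate are assumed illustration values, not those of the measured N-V center).
import numpy as np

gamma = 28.0e9              # Hz/T, electron gyromagnetic ratio gamma_e/(2*pi)
tau = 0.4e-3                # s, interaction delay
I0, C = 1.0e5, 0.3          # assumed mean photon counts per data point and contrast

def fid_response(B):
    """Assumed sinusoidal FID readout model: offset I0, contrast C."""
    return I0 * (1.0 + 0.5 * C * np.sin(2.0 * np.pi * gamma * tau * B))

B_bias = 0.0                                                   # steepest (linear) point
dB = 1e-12
g = (fid_response(B_bias + dB) - fid_response(B_bias - dB)) / (2 * dB)   # gradient dI/dB
sigma_shot = np.sqrt(I0)                                       # photon shot noise in counts
sigma_n_field = sigma_shot / abs(g)                            # noise std expressed as a field
print(sigma_n_field)                                           # ~3e-10 T for these assumed numbers
```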
Since each data point essentially follows from integrating the magnetic field between the microwave pulses, the sensitivity becomes severely worse if the randomness is so extreme that the phase also flips during this interval. Thus, in practice, the std method has a limit on the rate of randomness it can handle.
F. Frequency
It is possible to find an estimate of the frequency of the measured signal via the std method. When averaging over groups of M data points and computing the std of the resulting signal, the std decreases toward zero. For example, the original signal of Fig. 1(c) has a data point every 1.75 ms, and its std is given by the blue dot in Fig. 4(a). After averaging over two data points (M = 2), which corresponds to an "averaged time" of 3.5 ms, as shown by the inset of Fig. 4(a), the std decreases, as indicated by the red cross in Fig. 4(a). The reason for the decrease is twofold: the noise is averaged, which reduces its std to σn/√M, and averaging a sinusoid eventually decreases its amplitude toward zero, thus reducing its std [see Eq. (3), which is explored in Appendix C].
Therefore, by computing the std for a range of M, minima appear when the averaged time is around an integer multiple of the period of the signal. Hence, by locating these minima, or more precisely, by fitting the result to Eq. (C4), it is possible to find the frequency. It should be noted that, as opposed to the amplitude detection, the data points cannot be fully mixed (some temporal structure should remain), and of course, the frequency should not change drastically. Thus, the more randomness there is, the less accurately the frequency can be found by any method.
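The sketch below illustrates this procedure for an assumed noisy sinusoid (1.75 ms sampling, 100 Hz signal, low noise for clarity); the first local minimum of the block-averaged std gives a coarse period estimate, which the fit to Eq. (C4) would refine:

```python
# Sketch with an assumed noisy sinusoid: block-averaging M points suppresses the noise as
# sigma_n/sqrt(M) and the signal toward zero when M*dt nears a multiple of the period,
# producing minima in the std of the averaged data.
import numpy as np

rng = np.random.default_rng(3)
dt, f, A, sigma_n, N = 1.75e-3, 100.0, 1.0, 0.2, 20_000
t = np.arange(N) * dt
data = A * np.sin(2 * np.pi * f * t) + rng.normal(0, sigma_n, N)

Ms = np.arange(1, 40)
stds = []
for M in Ms:
    n_blocks = N // M
    blocks = data[: n_blocks * M].reshape(n_blocks, M).mean(axis=1)   # "averaged time" M*dt
    stds.append(np.std(blocks))
stds = np.array(stds)

# first local minimum ~ averaged time of one signal period; a fit to Eq. (C4) would refine this
i = next(k for k in range(1, len(stds) - 1) if stds[k] < stds[k - 1] and stds[k] < stds[k + 1])
print(Ms[i] * dt, 1.0 / (Ms[i] * dt))   # ~10 ms and a coarse estimate near 100 Hz
```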
G. Measurements
The measured sample is epitaxially grown onto a type-Ib (111)-oriented diamond substrate by microwave plasma-assisted chemical-vapor deposition with enriched 12C (99.99%) and with an estimated phosphorus concentration of 1 × 10^15 atoms cm^−3.17,22,23 We use a standard in-house-built confocal microscope to target individual electron spins residing in single N–V centers. Microwave (MW) pulses are applied via a thin copper wire. Since the nitrogen atom causes hyperfine splitting of the ms = ±1 states, each energy difference is addressed with a separate MW source to ensure maximum contrast in our measurements. Magnetic fields are induced with a coil near the sample. All experiments are conducted under ambient conditions.
A single N–V center with an inhomogeneous dephasing time T2* in the ms range is used for the experiment. We use an interaction delay of τ = 0.4 ms, which is optimal for dc measurements17 (comments on the optimum follow in the discussion). To collect sufficient data within our 1 s measurement window (which, including overhead, sets N = 2361), and to demonstrate a way to limit σn, we accumulate photons for 5000 iterations of this 1 s window. Since photon shot noise is the main source of noise, this sets σn ∝ 1/√Nphotons, with Nphotons being the number of photons read out per FID subsequence.17 In order to study the effect of signal randomness, we choose a frequency of 100 Hz, which allows us to change its phase over several periods. The amplitude of the applied magnetic field is 12 nT. With σn ≈ 5 nT (see Fig. 3), we have σn ≈ σy, which corresponds to the central regime [see Fig. 2(d)].
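A minimal check of this shot-noise scaling, using an assumed per-sequence photon yield (the actual yield is not quoted here), is the following:

```python
# Minimal check of this scaling (assumed Poisson photon statistics and a hypothetical
# per-sequence photon yield): accumulating R iterations multiplies the mean counts by R,
# so the relative shot noise, and hence sigma_n, shrinks by sqrt(R).
import numpy as np

rng = np.random.default_rng(6)
mean_photons, R = 0.03, 5000                       # assumed yield per FID subsequence; iterations
single = rng.poisson(mean_photons, size=100_000)
summed = rng.poisson(mean_photons * R, size=100_000)
rel_single = np.std(single) / mean_photons         # relative noise for one iteration
rel_summed = np.std(summed) / (mean_photons * R)   # relative noise after accumulation
print(rel_single / rel_summed, np.sqrt(R))         # ratio ~ sqrt(R)
```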
The results for a regular signal (no randomness) are shown in Figs. 5(a) and 5(b). They yield a field amplitude of nT and an estimate of the frequency of Hz. Next, we change the phase of the field at random every 10 ms, i.e., once per period, within the 1 s window. The results for this case are shown in Figs. 5(c) and 5(d). Here, the measured field amplitude is nT and the frequency estimate is Hz. This exemplifies that the less temporal structure the signal retains (here, the phase changes once per period), the harder it becomes to find the frequency, while the amplitude detection is not affected. It should be noted that for both measurements, there is a slow change in the background (which effectively means that the local mean at t = 0 s and at t = 1 s are not the same), as there is no insulation in our experimental setup, which degrades the results slightly.
III. DISCUSSION
The first advantage of the std method is that the measured signal can include randomness. The repetition time of the arbitrary phase changes in our measurement (once per period) is rather short, yet the amplitude is still found accurately. Thus, the method is resistant to changes in phase and, in principle, to frequency changes as well (e.g., it can measure a chirp, which could be important for radar). When the signal amplitude is sufficiently large, the randomness of the signal itself limits the sensitivity. The onset of this region depends on the level of randomness, as indicated by the dots in Fig. 2(d). Practically, for detection purposes, this is irrelevant, as a lower sensitivity is sufficient for detecting a larger signal. Moreover, the limited range of quantum sensors does not allow measuring large signals without high-dynamic-range techniques;8 thus, quantum sensors find their applications mostly in detecting small signals. For example, the coherence time of dark matter is about 1.6 × 10^5 periods of the signal,12 which is much longer than the half-life of ten periods we show in Figs. 2(d) and 2(e) in order to illustrate the effect. Furthermore, the changes in phase are generally not discrete but continuous. There can also be data points at the moment a phase changes, but as long as the interaction delay is much shorter than the time frame of the random occurrences, these have little effect as well. If the sensor moves constantly with respect to the source, and/or no precise time synchronization is available, then both the amplitude and the effective phase can change randomly over time. The amplitude can still be estimated, and for merely detecting whether there is a signal at all, the method works robustly. It should be noted that although the averaged amplitude of a randomly oriented sinusoid goes to zero, its std does not.
The second advantage is the limited memory requirement. For single N–V centers, the required storage is rather small, as the average number of photons per measurement sequence is currently ≪1, so the probability of measuring two or more photons is fairly negligible. Thus, storing data in a few numbers (say, four bins for 0, 1, 2, and more photons) is sufficient. For an ensemble of N–V centers, the number of photons is much higher. However, the data can still be stored in a limited number of bins defined by the requested accuracy. The width of the bins themselves does not define the accuracy, as the accuracy of the std is significantly better. For example, 5000 measurements stored in two numbers (say, with bin centers at 0 and 1) give a resolution of 4 × 10^−8 at the most sensitive point, which is over seven orders of magnitude finer than the spacing of 1 between the two bin centers. Of course, for estimating the frequency, multiple sets of bins are required, depending on the desired resolution of the estimation.
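A sketch of this binned bookkeeping is shown below; the sparse photon statistics and the treatment of the overflow bin are assumptions for illustration:

```python
# Sketch of binned bookkeeping (assumed bin layout: photon counts 0, 1, 2, and a "3 or
# more" overflow bin); the std is computed from the bin counts and bin centers, so only
# four integers need to be stored per data set.
import numpy as np

def update_bins(bins, photons):
    """Accumulate one readout into the four stored bins."""
    bins[min(photons, 3)] += 1

def std_from_bins(bins, centers=(0, 1, 2, 3)):
    """Weighted std over the bin centers (the overflow bin is approximated by its center)."""
    counts = np.asarray(bins, dtype=float)
    centers = np.asarray(centers, dtype=float)
    mean = np.sum(counts * centers) / counts.sum()
    var = np.sum(counts * (centers - mean) ** 2) / counts.sum()
    return np.sqrt(var)

rng = np.random.default_rng(4)
bins = [0, 0, 0, 0]
for photons in rng.poisson(0.03, size=5000):   # sparse photon counts, <<1 per sequence
    update_bins(bins, photons)
print(bins, std_from_bins(bins))
```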
The discussion above assumed the standard two-pass algorithm to compute the std. Alternatively, several one-pass algorithms exist,24 which limit the memory requirement to only two numbers to be saved in any case. It is important to take round-off errors into account for such one-pass algorithms. Furthermore, when the average of the signal is zero or adjusted to zero, which is often the case, accumulating the sum of squares yields the variance without any error, while only a single number needs to be stored. Of course, in general, e.g., for dark matter experiments, all data would be stored anyway, but for practical applications, this is an advantage. For all algorithms, including the standard two-pass one, the computational complexity is only O(N).
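As an example of such a one-pass scheme, a minimal sketch of Welford's online algorithm (one well-known algorithm of this type) is:

```python
# One-pass (streaming) std via Welford's online algorithm: only the running count, mean,
# and sum of squared deviations are kept in memory (the count is usually known from the
# measurement settings, leaving two numbers to be saved).
def welford_std(stream):
    count, mean, m2 = 0, 0.0, 0.0
    for x in stream:
        count += 1
        delta = x - mean
        mean += delta / count
        m2 += delta * (x - mean)          # uses the updated mean; numerically stable
    return (m2 / count) ** 0.5            # population std; use count-1 for the sample std

print(welford_std([1.0, 2.0, 3.0, 4.0]))  # sqrt(1.25) ~ 1.118
```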
For the regime σn < σy, the sensitivity of the std method is the same as that of the low-frequency method,11 which is a certain factor worse than measuring a constant signal with the Ramsey sequence. It should be noted that, as opposed to the intuition given there,11 this factor is actually related to the shape of the underlying signal, a sinusoid in this case, and would be different for other signals. For example, for a square wave, the sensitivity would be the same as that of the Ramsey method for constant fields. If the signal distribution is unknown, it could be estimated from the measured data points (which requires storing all data points in this case), provided the distribution of the noise is known. Although the exact shape of the signal does not follow unambiguously (e.g., the distribution of a sine is frequency-independent), the amplitude can be estimated.
For the delay τ of the FID sequence, the optimum of the low-frequency method is used, since the pulse sequence is similar.11 For the regime σn ≪ σy, the sensitivity is the same, so this delay is also optimal for the std technique. However, the detection limit has a different dependency on the delay, so the optimum delay for detection could be different. When optimizing this numerically for our case with the N–V center, a slightly longer delay is optimal. Since the delay that we use results in only a slightly worse sensitivity (about 2.8%), the behavior studied for the purely mathematical case is not expected to differ significantly for the N–V center.
A downside of the std method is the reduction in sensitivity when the signal has a low std (and thus amplitude) compared to the noise. Thus, only if the signal has no randomness does a conventional method have a better sensitivity here. There are many signals with rather small amplitudes, for example, in the search for dark matter.12 However, as the randomness of these signals still allows many stable periods, it is possible to accumulate the signal over multiple (but far from all) periods, hence reducing both the effective std of the noise σn and the number of data points N. Other ways to reduce the std of the noise are increasing the number of N–V centers, improving the readout (better collection efficiency, single-shot readout), and employing entanglement.
Another limitation is that there is a maximum supported signal frequency, because the FID sequence samples the signal. Thus, the frequency should be below the Nyquist frequency, which could be increased by decreasing the delay at the expense of the sensitivity. However, an alternative is to use ac sequences, such as the Hahn-echo sequence. When the signal is synchronized perfectly to the sequence, the readout signal is at its minimum or maximum. On the other hand, if it is exactly out of sync, the readout signal is zero [see Fig. 6(a)]. Therefore, when the phase of the signal changes over time (or, alternatively, the sequence starts are randomized), the resulting data points also have a sine-like distribution. The measurements in Fig. 6(b) show that the sensitivity for B of the Hahn-echo method without randomness is better than the sensitivity of the std method with randomness. Since the sensitivity for the std itself would be the same as for the Hahn-echo method, it is indeed expected, given Eq. (3), that the sensitivity for the amplitude is worse, although in return one gains the advantages of the std method. It should be noted that the bandwidth of such an ac measurement is limited to a narrow band defined by the delay.
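The sketch below illustrates this for an assumed linear-regime model, in which the per-sequence readout is proportional to A cos φ with a random phase φ; the std then still returns the amplitude:

```python
# Sketch under an assumed linear-regime model: the Hahn-echo readout is taken proportional
# to A*cos(phi) with the signal phase phi random per sequence, giving the sine-like
# (arcsine) distribution mentioned above; the std still returns the amplitude A.
import numpy as np

rng = np.random.default_rng(7)
A, sigma_n, N = 1.0, 0.3, 100_000
phi = rng.uniform(0, 2 * np.pi, N)                       # unsynchronized signal phase
readout = A * np.cos(phi) + rng.normal(0, sigma_n, N)    # per-sequence data points
A_est = np.sqrt(2.0 * max(np.var(readout) - sigma_n**2, 0.0))
print(A_est)                                             # ~ A
```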
In conclusion, we have introduced the std method for quantum sensing. Long measurements under demanding circumstances are possible, even when the randomness of the signal results in significant changes during the measurement time. Furthermore, there is the potential to limit the memory requirement to a single number and the power usage owing to the linear computational complexity. Its sensitivity is similar to those of other coherence-based methods for signals stronger than the noise level and lower otherwise, which can be mitigated in several ways. This method is promising for applications that require robust, long measurements of unknown signals and of unstable signals of which only the distribution can be measured, such as in harsh deep-sea mineral searches and in the search for dark matter. Moreover, it is a motivation to carefully explore other measurable statistics to find their specific merits.
ACKNOWLEDGMENTS
The authors acknowledge the financial support from KAKENHI (Grant No. 22K14560), MEXT Q-LEAP (Grant No. JPMXS0118067395), CREST (Grant No. JPMJCR23I5), and the Collaborative Research Program of ICR, Kyoto University (2021-114).
AUTHOR DECLARATIONS
Conflict of Interest
The authors have no conflicts to disclose.
Author Contributions
E. D. Herbschleb: Conceptualization (equal); Formal analysis (lead); Funding acquisition (supporting); Investigation (lead); Writing – original draft (lead); Writing – review & editing (equal). S. Chigusa: Conceptualization (equal); Writing – review & editing (equal). R. Kawase: Resources (lead); Writing – review & editing (equal). H. Kawashima: Resources (supporting); Writing – review & editing (equal). M. Hazumi: Supervision (supporting); Writing – review & editing (equal). K. Nakayama: Supervision (supporting); Writing – review & editing (equal). N. Mizuochi: Funding acquisition (lead); Supervision (lead); Writing – review & editing (equal).
DATA AVAILABILITY
The data that support the findings of this study are available from the corresponding authors upon reasonable request.
APPENDIX A: LARGE SIGNAL
It should be noted that this is independent of the signal shape. Even if it is a delta-like function, for a large enough σy, at least one signal value will be large enough, so the above is true with M = 1. We confirmed these results by numerical simulations.
APPENDIX B: SINUSOIDAL UNCERTAINTY
For estimating the noise std σn outside the linear regime of the sinusoid shown in Fig. 3, its shape needs to be taken into account, as the sine effectively stretches the tails of the distribution when converting the FID data to B. Moreover, due to shot noise, the FID data can have values outside the limits of this sinusoidal curve. Hence, we calculate σn numerically by generating data that have a normal distribution with an average equal to the offset of the sinusoid shown in Fig. 3 and a standard deviation of σshot-noise. Then, we convert these data to magnetic field data while limiting the generated data to the range of the sine. Finally, we compute the std of this signal, which is σn. Within the linear regime, this gives the same result as shown in Fig. 3.
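A numerical sketch of this procedure is given below; the sinusoidal response, offset, contrast, and shot-noise level are assumed example values, not those of Fig. 3:

```python
# Numerical sketch of this procedure (the sinusoidal response, offset, contrast, and
# shot-noise level are assumed example values, not those of Fig. 3).
import numpy as np

rng = np.random.default_rng(5)
gamma, tau = 28.0e9, 0.4e-3            # Hz/T and s: gyromagnetic ratio and interaction delay
I0, C = 100.0, 0.3                     # assumed offset (counts) and contrast of the sine
amp = 0.5 * C * I0                     # amplitude of the sinusoidal response in counts
sigma_shot = np.sqrt(I0)               # photon shot-noise std, here comparable to amp

data = rng.normal(I0, sigma_shot, size=1_000_000)                # normal data around the offset
data = np.clip(data, I0 - amp, I0 + amp)                         # limit to the range of the sine
B = np.arcsin((data - I0) / amp) / (2.0 * np.pi * gamma * tau)   # convert counts to field
print(np.std(B),                                                 # sigma_n outside the linear regime
      sigma_shot / (amp * 2.0 * np.pi * gamma * tau))            # simple linear-regime estimate
```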