Quantum sensing has a bright future for applications in need of impeccable sensitivities. The study of periodic fields has resulted in various techniques, which deal with the limited coherence time of the quantum sensor in several ways. However, the periodic signal to be measured could include forms of randomness as well, such as changes in phase or in frequency. In such cases, the long measurement times required to detect the smallest field amplitudes hamper the effectiveness of conventional techniques. In this paper, we propose and explore a robust sensing technique to combat this problem. Instead of measuring the signal amplitude directly, we measure another global property of the signal, in this case the standard deviation, which results in a much-improved sensitivity. We analyze the advantages and limitations of this technique, and we demonstrate its operation with a measurement using a nitrogen-vacancy center. This work encourages scouting measurements of alternative statistics.

Owing to its sensitivity and potentially high spatial resolution, quantum sensing has found applications in various fields. Given their excellent quantum properties even under ambient conditions and their atomic size,1 nitrogen-vacancy (N–V) centers are a promising option. Example applications are nuclear magnetic resonance,2 spin-wave measurements,3 the measurement of neuronal action potentials,4 and imaging the hydrodynamic flow of current.5 Various techniques allow studying a variety of periodic signals, such as the classical Hahn-echo sequence for synchronized measurements,6,7 multi-area sequences for a high dynamic range,8 the Fourier transform for the frequency components of long data records,2,9 virtual transitions of periodically driven N–V centers for high-frequency signals,10 and fitting techniques for low-frequency signals.11 However, there are scenarios where none of these techniques are ideal.

One such scenario occurs when the signal of interest is periodic but contains randomness that potentially changes its direction or oscillation phase. This can be explored with a signal that has an arbitrarily changing phase. A technique such as Hahn-echo would average over many phases, thus resulting in a decrease in the signal [see Figs. 1(a) and 1(b)]. An example of such a signal appears in the search for dark matter.12,13 Some dark matter candidates, e.g., wave dark matter,14,15 have a finite coherence time and change velocity and phase at random over time. Since the effective amplitude of the dark matter signal is rather small, measurement times much longer than the time scale of such changes are required.

FIG. 1.

Method. (a) A regular signal (red dotted line) and a signal with randomness modeled with arbitrary phase jumps (blue line). Both are periodic in nature with an amplitude A; the shaded areas indicate the lengths of single periods of the underlying signal. (b) Conventional amplitude-based methods accumulate data over time, which results in a good representation for the regular signal (red crosses), but amplitude information becomes weaker for the signal with randomness (blue circles). (c) Alternatively, the sampled signal (blue dots for the signal with phase jumps), labeled yi for time ti with i ∈ {0, …, N − 1}, can be processed differently. (d) The resulting histogram for y given a large number of data points. As the signal is not a constant, the data have a non-zero standard deviation σy, which depends on the distribution of the signal. The aim of the method is to measure this σy to derive the amplitude of the signal.


In the following, we present a measurement technique based on finding the standard deviation of the signal. Such a method is less affected by several kinds of randomness in the signal, thus increasing the detection robustness, and it requires little data memory. We analyze the technique in depth, analytically and by numerical simulation, to explore its properties. Finally, we perform example measurements using a single N–V center at room temperature.

The proposed technique is shown in Fig. 1. A signal is sampled at each time ti, and the standard deviation (std) of the data is computed. The std, for which we use the symbol σ, is defined as
$\sigma = \sqrt{\frac{1}{N}\sum_{i=0}^{N-1}\left(y_i - \mu\right)^2}$, (1)
with N being the number of data points, yi being the ith data point, and μ being the average of the data points.
If the signal is not a constant, it has a non-zero std. In this paper, we will study a sinusoidal signal,
$y_i = A \sin\left(2\pi f t_i + \phi\right) + O$, (2)
where ti is the time for the ith data point, A is the amplitude, f is the frequency, ϕ is the phase, and O is an offset. The std σy for this signal is [when measuring a sufficient number of periods; see Fig. 1(d)]
$\sigma_y = \frac{A}{\sqrt{2}}$. (3)
This is directly proportional to the amplitude of the signal. Therefore, by finding the std of the signal, we can derive its amplitude. It should be noted that the exact relation depends on the shape of the signal; for example, for a square wave it would be σy = A. It should also be noted that, even if the amplitude of the signal changes over time, the following argument applies by replacing A with Aeff, where Aeff is an “average” of the amplitude according to its distribution.
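To illustrate, the following minimal Python sketch (not from the original work; all parameter values are illustrative assumptions) samples a sinusoid whose phase jumps at random moments and recovers the amplitude from the std via Eq. (3):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Illustrative values: a 100 Hz sinusoid, Eq. (2), sampled every 0.175 ms,
# whose phase jumps to a random value at random moments [cf. Fig. 1(a)].
A, f, O = 1.0, 100.0, 0.0
t = np.arange(0.0, 1.0, 1.75e-4)
y = np.empty(t.size)
phi = 0.0
for i, ti in enumerate(t):
    if rng.random() < 0.01:                # occasional random phase jump
        phi = rng.uniform(0.0, 2.0 * np.pi)
    y[i] = A * np.sin(2.0 * np.pi * f * ti + phi) + O

sigma_y = y.std()                          # std of the data, Eq. (1)
A_est = np.sqrt(2.0) * sigma_y             # invert Eq. (3)
print(f"estimated amplitude: {A_est:.2f} (true: {A})")
```

Despite the phase jumps, the printed estimate stays close to the true amplitude, since the jumps do not change the distribution of the samples.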

The std of data depends on their distribution. Hence, as long as the distribution does not change, the std remains the same. A change of the phase at random moments, or of the frequency, does not change the distribution; thus, the std inherently remains the same. This is reflected in the independence of Eq. (3) of the phase and the frequency. It should be noted that although the std seems independent of the offset as well, a gradual change in the offset, as opposed to changes in the frequency and phase, would change the average, thus actually affecting the std. However, the std of a finite signal with, e.g., random phase changes does become a random variable, with σy as its expected value, since the result of Eq. (1) will vary depending on the precise data. In addition, for the sinusoid of Eq. (2), computing the std for a finite signal could actually result in a slightly different expected value, but this change is negligible compared to the other uncertainties relevant here when many periods are measured.

In the absence of noise, any method would have a perfect sensitivity. Therefore, we have to analyze the noise to determine the sensitivity of this method. Here, we assume a white noise source with a normal distribution that has an average of 0 and a std σn. It should be noted that σn is a property of the noise and thus a constant. Figure 2(a) shows an example of data consisting of noise only. The principle of the std method does not change for other noise sources, but the sensitivities derived later might. Nonetheless, this noise model is a fair representation of many sources, such as counted photons. Moreover, when utilizing sufficiently large ensembles of sensors, the central limit theorem states that the resulting distribution will be normal.

FIG. 2.

Noise and sensitivity. (a) Simulated data points (black dots) measured over time. Here, there is only noise, which has a normal distribution with σn = 10 (indicated with horizontal magenta dashed lines around the teal-dashed-line mean). (b) Histogram of a large number of data points as in panel (a). The expected Gaussian shape is visible. (c) Simulation results giving the data std σ̂D for N = 10³. The N data points are generated according to Eq. (2) while adding random noise as in panel (a), and this is repeated 10⁵ times to get the plotted distributions. This distribution of σ̂D depends on the signal's std σy. For σn ≫ σy, σD is close to σn (vertical olive dotted lines in the left three plots), while for σn ≪ σy, σD is close to σy (vertical blue dotted lines in the right two plots). By finding the change Δσ (middle plot), the signal's σy and hence its amplitude can be found. (d) Simulation results for the uncertainty σ_{σ̂y} when varying σy. For σy < σn, the uncertainty is limited by the noise; from σy = σn, it is flat; and at some point when σy > σn, it is limited by the randomness of the signal (the slanted blue dashed line plots this limit for N = 10³), which scales with the signal's amplitude. The onset of this regime depends on the level of randomness of the signal (gray dashed lines for N = 10⁷, leftmost completely random, remainder with half-life of 10ⁿ periods with n = 0–5). The uncertainties for two more conventional methods (slanted black dashed-dotted line11 and slanted cyan dashed line for Hahn-echo) are independent of N. Panel (e) is as panel (d), but the uncertainty is converted to the sensitivity via Eq. (9). The sensitivities for the more conventional methods [same line styles as in (d)] become worse for longer measurements when there is randomness. Hahn-echo is slightly worse, since even without phase randomness, it is already phase-dependent.

To find an estimate for σn, we can simply measure the noise and compute the std of the resulting data [see Fig. 2(b)]. This is a sampled std: each time we repeat this measurement, the resulting std will be slightly different given the random nature of noise. Therefore, the "sampled σn," which we indicate with a hat as σ̂n, is a random variable with a distribution that has σn as its expected value. The longer we measure, the more accurately we can find this expected value, and thus the narrower the distribution becomes. The sampled variance follows a chi-squared distribution,16 and the std a chi distribution, which leads to a std of σ̂n of
$\sigma_{\hat{\sigma}_n} = \frac{\sigma_n}{\sqrt{N}}\sqrt{N - 2\left(\frac{\Gamma\left((N+1)/2\right)}{\Gamma\left(N/2\right)}\right)^2} \approx \frac{\sigma_n}{\sqrt{2N}}$, (4)
with Γ being the gamma function and the approximation valid for N ≫ 1, which is generally the case for the envisaged long measurements [see Fig. 2(c)]. This shows that for longer measurements, which have a larger N, the accuracy increases, as expected.
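As a quick numerical check of this scaling (a sketch with illustrative values, not the simulation of this work), the empirical spread of the sampled noise std can be compared with the large-N approximation of Eq. (4):

```python
import numpy as np

rng = np.random.default_rng(seed=2)

sigma_n, N, repeats = 10.0, 1000, 10_000
noise = rng.normal(0.0, sigma_n, size=(repeats, N))
sampled_stds = noise.std(axis=1)           # one sampled std per repetition

print(sampled_stds.std())                  # empirical spread of the std
print(sigma_n / np.sqrt(2 * N))            # large-N approximation, Eq. (4)
```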
Actual data consist of noise added to the signal; thus, the datum at time ti is Di = ni + yi, with ni being the noise at time ti. When the noise and the signal are independent, which we reasonably assume to be the case, their variances add. Thus, the std of the data σD is
$\sigma_D = \sqrt{\sigma_n^2 + \sigma_y^2}$. (5)

Several examples of the distribution of a sampled σ̂D are shown in Fig. 2(c), where σy is varied while keeping σn constant.
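The quadrature addition of Eq. (5) can be checked with a short sketch (illustrative values):

```python
import numpy as np

rng = np.random.default_rng(seed=3)

A, f, sigma_n = 10.0, 100.0, 10.0          # illustrative values
t = np.arange(0.0, 1.0, 1e-4)
data = A * np.sin(2 * np.pi * f * t) + rng.normal(0.0, sigma_n, t.size)

sigma_y = A / np.sqrt(2.0)                 # Eq. (3)
print(data.std())                          # sampled std of the data
print(np.hypot(sigma_n, sigma_y))          # Eq. (5)
```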

The idea is to sense the signal amplitude A by measuring the std of the data. As A is directly proportional to σy [see Eq. (3)], we explore detecting σy first. As a reminder, σn and σy are the actual stds of the noise and the signal, respectively, while σ̂n and σ̂y are the sampled values.

There are three regimes to study. The first two occur when σn ≪ σy. In this case, we find a direct relation between the measured and the desired std as
$\sigma_y \approx \sigma_D$. (6)
Since the signal includes randomness, the sampled σ̂y, just like the sampled σ̂n, has an uncertainty even in the absence of noise. We will not analyze this analytically in depth, as it depends on the details of the randomness of the signal. Since the uncertainty scales linearly with a multiplicative factor, i.e., σ_{Aŷ} = Aσ_ŷ, increasing the signal and thus σy eventually makes the randomness of the signal the main source of uncertainty. Therefore, for a sufficiently large signal, the uncertainty increases linearly as
$\sigma_{\hat{\sigma}_y} \propto \sigma_y \propto A$. (7)
The relevance of this regime is explored in the discussion section.
The second regime occurs when the uncertainty due to sampling of the signal alone is negligible. Now, while Eq. (6) still applies, the std of σ̂D is [see the fourth plot of Fig. 2(c)]
$\sigma_{\hat{\sigma}_D} \approx \frac{\sigma_n}{\sqrt{N}}$. (8)
The reason for the change in std, compared to Eq. (4), is explained in  Appendix A. Given Eq. (6), the distribution for σ̂y is the same as for σ̂D. The sensitivity η of a measurement is defined as17 
$\eta = \sigma\sqrt{T_{\text{meas}}}$, (9)
with σ being the uncertainty of the variable to be measured and Tmeas being the measurement time. With Tmeas ∝ N, we get for σn ≪ σy,
$\eta = \sigma_{\hat{\sigma}_y}\sqrt{T_{\text{meas}}} \approx \sigma_n\sqrt{\frac{T_{\text{meas}}}{N}}$. (10)
The third regime is when σn ≫ σy. In this case, aiming for a relation including σy, we find
$\sigma_D = \sigma_n\sqrt{1 + \frac{\sigma_y^2}{\sigma_n^2}} \approx \sigma_n + \frac{\sigma_y^2}{2\sigma_n}$, (11)
where the Taylor expansion around x = 0 for √(1 + x) results in the approximation. Here, we want to detect a change in std compared to when there is only noise. This change Δσ is [see the center plot of Fig. 2(c)]
$\Delta\sigma = \sigma_D - \sigma_n \approx \frac{\sigma_y^2}{2\sigma_n}$. (12)
Since there is only an offset between Δσ and σD, their sampling distributions are the same. As the signal is small, the std of σ̂D, and thus of Δσ̂, remains as given in Eq. (4). However, σy ∝ √(Δσ); thus, the distribution of the square root of Δσ̂ is required instead. Often, such a distribution has no analytical solution, as the variable can have negative values. To get an idea of the behavior, we can imagine decreasing σy. The smaller the signal is, the closer the mean of Δσ̂ gets to zero. Since the distribution width does not change [see the left plots of Fig. 2(c)], the std of its square root becomes worse (e.g., a width of 5 around a mean of 100 has a square-root width of 0.25, while the same width at 10 has a square-root width of 0.80). The negative part of Δσ̂ has no meaningful square root value, and it is set to zero, as σy would be estimated to be zero.
This small-signal regime is important for detection purposes. Therefore, we will determine the detection limit. There are several methods with similar performance to find this limit, such as Welch's t-test18 or ANOVA;19 here, we use hypothesis testing. The null hypothesis is chosen as H0: σy = 0, and as the alternative hypothesis, we have H1: σy > 0. As it is common for dark-matter particle searches to use a confidence level of 95%, and we have a one-sided test, we reject the null hypothesis when the measured Δσ̂ ≥ 1.645σ_{σ̂n} (the chi distribution approaches a normal distribution for large N). At this threshold, we find the smallest signal that we can detect, thus
$\sigma_{y,\text{min}} = \sqrt{2 \times 1.645}\,\frac{\sigma_n}{\left(2N\right)^{1/4}}$. (13)
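For concreteness, a small sketch of this detection criterion (illustrative values, using the threshold and limit as given above):

```python
import numpy as np

sigma_n, N = 10.0, 1000                    # illustrative values

# One-sided 95% test: reject H0 (no signal) if the measured increase of
# the std exceeds 1.645 times the spread of the noise-only std, Eq. (4).
threshold = 1.645 * sigma_n / np.sqrt(2 * N)

# Smallest detectable signal std, Eq. (13): the sigma_y whose expected
# increase, Delta sigma = sigma_y**2 / (2 * sigma_n), equals the threshold.
sigma_y_min = np.sqrt(2 * 1.645) * sigma_n / (2 * N) ** 0.25
print(threshold, sigma_y_min)
```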

To verify the derivations above, we simulate a signal with random noise a sufficient number of times to accurately find the distributions of the resulting uncertainty in the variable to be measured, σy. In this numerical simulation, we randomly change the phase of the signal as if this phase has a half-life of ten periods. The results are shown in Fig. 2(d); they are consistent with Eqs. (7), (8), and (13) of the three explored regimes. The sensitivities for these simulations are shown in Fig. 2(e), showing consistency with Eq. (10).

For the case of N = 10⁷, we vary the randomness of the phase. For a completely random phase, in which case essentially only a distribution is measured, there is still an optimum at about σy = σn, but the randomness of the signal immediately limits the uncertainty for stronger signals. When reducing the randomness, the flat area, where the sensitivity is the same as for a conventional method (only if there is no randomness), increases in size.

For the sake of comparison, we simulate the conventional Hahn-echo technique and another method,11 as the comparison reveals some interesting points. As the accumulated result needs to be adapted for the randomness, we scale the resulting simulated distribution of the measured σ̂y to have the expected average σy, so that we get a fair estimate of the uncertainty. This does not work for the high-noise regime, as the uncertainty of the noise would simply be scaled to the signal value, thus giving an incorrect estimate. In Fig. 2(d), the resulting uncertainty is shown to be independent of N. The reason is that an increase in N, on the one hand, reduces the averaged noise, while on the other hand, it increases the accumulated randomness of the signal; these effects compensate each other exactly. Hence, as shown in Fig. 2(e), the longer the measurement time (larger N), the better the relative sensitivity of the std method becomes compared to a conventional method. Therefore, the number of orders of magnitude of improvement strongly depends on the level of randomness and the measurement time.

Finally, the desired sensitivity is that of the amplitude A of the signal of Eq. (2). Since the relation between A and the currently investigated variable σy is linear [see Eq. (3)], the std and hence the sensitivity simply scale by √2 for a sinusoidal signal.

Here, we explore the sensitivity of measuring a magnetic field with a single N–V center. The measured signal stems from photons emitted by the N–V center, whose intensity depends on the spin state. When performing a measurement, here a free-induction-decay (FID) sequence7,20 [see Fig. 3(a)], the magnetic field for each data point follows from the gradient, as shown in Fig. 3(b). This gradient depends on the parameters of the N–V center and the chosen interaction delay [τ in Fig. 3(a)].17

FIG. 3.

N–V center measurement. (a) The data points yi of Fig. 1(c) are obtained via FID sequences. (b) Data of a single FID sequence when varying a constant magnetic field B (data with black crosses, fit with the red line). An oscillation occurs, as the electron spin of the N–V center (top-right inset) is rotating further with increasing field magnitude. The uncertainty in the measurement stems from shot-noise, which has a normal distribution with std σshot-noise for sufficiently many photons. In the linear regime, the measured data are converted to the magnetic field via the gradient grad. Thus, the std σn of the noise that is added to the signal of interest follows via the gradient as well.


Owing to photon counting, here, the relevant noise source is shot-noise.21 In the linear regime, the relation between this shot-noise std and the std of the output noise is derived from the gradient, as shown in Fig. 3(b). Outside the linear regime, the std of the intensity needs to be converted taking the sinusoidal shape resulting from the rotational symmetry of the spin into account, as explored in  Appendix B.

Combining Eq. (10) with Fig. 3(b), for σn ≪ σy, the sensitivity consequently becomes
$\eta \approx \frac{\sigma_{\text{shot-noise}}}{\text{grad}}\sqrt{\frac{T_{\text{meas}}}{N}}$, (14)
which is exactly the same as for the low-frequency method that measures the amplitude directly via the Ramsey sequence11 (if there were no randomness), as shown in the middle region of Fig. 2(e). It should be noted that for the small-signal case, again only if the signal has no randomness, a conventional method would have a better sensitivity than the std method, while for the large-signal case, the sensitivities would be equal [see Fig. 2(d) for N = 10⁷ with decreased randomness].

Since each data point essentially follows from integrating the magnetic field between the microwave pulses, the sensitivity becomes severely worse if the randomness is so extreme that the phase flips during this integration time as well. Thus, practically, the std method has a limit on the frequency of the randomness it can handle.

It is possible to find an estimate for the frequency of the measured signal via the std method. When averaging over groups of M data points and computing the std of the resulting signal, the std will shift toward zero. For example, the original signal of Fig. 1(c) has a data point every 1.75 ms, and its std is given by the blue dot in Fig. 4(a). After averaging over two data points (M = 2), which corresponds to an "averaged time" of 3.5 ms as shown by the inset of Fig. 4(a), the std decreases, as indicated by the red cross in Fig. 4(a). The reason for the decrease is twofold: the noise is averaged, thus σn,filtered = σn/√M, and averaging a sinusoid decreases its amplitude, eventually toward zero, thus reducing its std [see Eq. (3); this is explored in Appendix C].

FIG. 4.

Frequency estimation. (a) Signal std vs averaged time. By averaging groups of data points (inset, blue dots), for example, per two as shown in the inset (resulting in red crosses), the std decreases. When averaging over a full period, each data point becomes the same, hence the std becomes zero. This allows us to estimate the frequency. (b) In the plot of panel (a), there is no noise added to the signal. Here, there are two sources of noise added: noise with a normal distribution and a slowly changing background. The offset compared to panel (a) is caused by the slowly changing background. The black dashed lines in both plots are fits using Eq. (C4).


Therefore, by computing the std for a range of M, minima appear when the averaged time is around an integer multiple of the period of the signal. Hence, by locating these zeros, or more precisely, by fitting the result to Eq. (C4), it is possible to find the frequency. It should be noted that, in contrast to the amplitude detection, the data points cannot be fully mixed (some temporal structure should remain), and of course, the frequency should not change drastically. Thus, the more randomness, the less accurately the frequency can be found by any method.
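A minimal sketch of this frequency estimate (illustrative values; the fit to Eq. (C4) is replaced by simply printing the std versus the averaged time):

```python
import numpy as np

rng = np.random.default_rng(seed=4)

f, A, dt = 100.0, 1.0, 1.75e-4             # illustrative values
t = np.arange(0.0, 2.0, dt)
y = A * np.sin(2 * np.pi * f * t) + rng.normal(0.0, 0.1, t.size)

for M in (1, 2, 4, 8, 16, 32, 57):         # 57 * dt is about one period
    groups = y[: (y.size // M) * M].reshape(-1, M)
    averaged = groups.mean(axis=1)         # average per group of M points
    print(f"averaged time {M * dt * 1e3:6.2f} ms -> std {averaged.std():.3f}")
# The std is minimal where the averaged time matches a multiple of 1/f.
```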

The measured sample is epitaxially grown onto an Ib-type (111)-oriented diamond substrate by microwave plasma-assisted chemical-vapor deposition with enriched ¹²C (99.99%) and with an estimated phosphorus concentration of 1 × 10¹⁵ atoms cm⁻³.17,22,23 We use a standard in-house-built confocal microscope to target individual electron spins residing in single N–V centers. Microwave (MW) pulses are applied via a thin copper wire. Since the nitrogen atom causes hyperfine splitting of the ms = ±1 states, each energy difference is addressed with a separate MW source to ensure maximum contrast in our measurements. Magnetic fields are induced with a coil near the sample. All the experiments are conducted under ambient conditions.

A single N–V center with an inhomogeneous dephasing time T2* ≈ 0.8 ms is used for the experiment. We use an interaction delay of τ = 0.4 ms, which is optimal for dc measurements17 (comments on the optimum follow in the discussion). To collect sufficient data within our 1 s measurement window (which, including overhead, sets N = 2361), and to demonstrate a way to limit σn, we accumulate photons over 5000 iterations of this 1 s window. Since photon shot-noise is the main source of noise, this sets σn to 1/(grad √(5000 Nphotons)), with Nphotons being the number of photons read out per FID subsequence.17 In order to study the effect of signal randomness, we choose a frequency of 100 Hz, which allows us to change its phase over several periods. The amplitude of the applied magnetic field is 12 nT. With σn ≈ 5 nT (see Fig. 3), we have σn ≈ σy, which corresponds to the central regime [see Fig. 2(d)].

The results for a regular signal (no randomness) are shown in Figs. 5(a) and 5(b). We find a field amplitude of 12.1 +0.1/−0.1 nT and a frequency estimate of 100.0 +0.4/−0.4 Hz. Next, we change the phase of the field at random every 10 ms, thus once per period, within the 1 s window. The results for this case are shown in Figs. 5(c) and 5(d). Here, the measured field amplitude is 12.0 +0.1/−0.1 nT, and the frequency estimate is 74.8 +1.4/−1.3 Hz. This exemplifies that the less temporal structure the signal retains, as here the phase changes once per period, the harder it becomes to find the frequency, while the amplitude detection is not affected. It should be noted that for both measurements, there is a slow change in the background (which effectively means that the local mean at t = 0 s and at t = 1 s are not the same), as our experimental setup has no insulation; this degrades the results slightly.

FIG. 5.

Measurements. (a) FID data points (black dots) vs time for a region of the total measurement (1 s window). The red dashed line is a guide to the eye to illustrate that there are no random phase jumps. (b) Signal std (magenta circles) vs averaged time for the data from panel (a). A fit with Eq. (C4) (black dashed line) estimates f ≈ 100 Hz. (c) FID data points vs time as in panel (a). Now, the guide to the eye illustrates that the signal is not regular. Here, there are random phase jumps once per period. (d) Signal std vs averaged time as in panel (b) for the data from panel (c). The fit estimates f ≈ 75 Hz. This inaccuracy is due to the high repetition rate of the phase jumps, which reduces the temporal structure of the signal.


The first advantage of the std method is that the measured signal can include randomness. The repetition time of the arbitrary phase changes in our measurement (once per period) is rather short, yet the amplitude is still found accurately. Thus, the method is resistant to changes in phase and, in principle, to changes in frequency as well (e.g., it can measure a chirp, which could be important for radar). When the signal amplitude is sufficiently large, the randomness of the signal itself limits the sensitivity. The onset of this region depends on the level of randomness, as indicated by the dots in Fig. 2(d). Practically, for detection purposes, this is irrelevant, as a lower sensitivity is sufficient for detecting a larger signal. Moreover, the limited range of quantum sensors does not allow measuring large signals without high-dynamic-range techniques;8 thus, quantum sensors find their applications mostly in detecting small signals. For example, the coherence time of dark matter is about 1.6 × 10⁵ periods of the signal,12 which is much longer than the half-life of ten periods we show in Figs. 2(d) and 2(e) to illustrate the effect. Furthermore, changes in phase are generally not discrete, but continuous. There can also be data points at the moment a phase changes, but as long as the interaction delay is much shorter than the time frame of the random occurrences, these have little effect as well. If the sensor were moving with respect to the source constantly, and/or no precise time synchronization were available, then both the amplitude and the effective phase would change randomly over time. The amplitude can still be estimated, and for merely detecting whether there is a signal at all, the method works robustly. It should be noted that although the averaged amplitude of a randomly oriented sinusoid goes to zero, its std does not.

The second advantage is the limited memory requirement. For single N–V centers, the required storage is rather limited, as the average number of photons per measurement sequence is currently ≪1, so the probability of measuring two or more photons is fairly negligible. Thus, storing data in a few bins (say four bins for 0, 1, 2, and more photons) is sufficient. For an ensemble of N–V centers, the number of photons is much higher. However, the data can be stored in a limited number of bins defined by the requested accuracy. The width of the bin itself does not define the accuracy, as the accuracy of the std is significantly better. For example, 5000 measurements stored in two bins (say, with bin centers at 0 and 1) have a resolution of 4 × 10⁻⁸ at the most sensitive point; with a difference of 1 between the two bin centers, the resolution is thus over seven orders of magnitude finer. Of course, for estimating the frequency, multiple sets of bins are required, depending on the desired resolution of the estimation.

The previous paragraph discussed the standard two-pass algorithm to compute the std. Alternatively, several one-pass algorithms exist,24 which limit the memory requirement to only two numbers in any case. It is important to take round-off errors into account for such one-pass algorithms. Furthermore, when the average of the signal is zero or adjusted to zero, which is often the case, accumulating yi² yields the variance without any errors, while only a single number needs to be stored. Generally, e.g., for dark matter experiments, all data would be stored anyway, but for practical applications, this is an advantage. For all algorithms, including the standard two-pass one, the computational complexity is only O(N).
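As a sketch of such a one-pass computation (a Welford-style update in the spirit of the algorithms analyzed in Ref. 24; not the code used for this work):

```python
def running_std(samples):
    """One-pass (population) std; only three running numbers are stored."""
    count, mean, m2 = 0, 0.0, 0.0
    for x in samples:
        count += 1
        delta = x - mean
        mean += delta / count              # running mean
        m2 += delta * (x - mean)           # running sum of squared deviations
        # m2 uses both the old and updated mean, limiting round-off errors
    return (m2 / count) ** 0.5             # std as in Eq. (1)
```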

For the regime σn < σy, the sensitivity of the std method is the same as for the low-frequency method,11 which is √2 worse compared to measuring a constant signal with the Ramsey sequence. It should be noted that, as opposed to the intuition given there,11 this factor is actually related to the shape of the underlying signal, a sinusoid in this case, and would be different for other signals. For example, for a square wave, the sensitivity would be the same as that of the Ramsey method for constant fields. If the signal distribution is unknown, it could be estimated from the measured data points (thus storing all data points in this case), provided the distribution of the noise is known. Although the exact shape of the signal does not follow unambiguously (e.g., the distribution of a sine is frequency-independent), the amplitude can be estimated.

As the delay τ for the FID sequence, the optimum for the low-frequency method is used, since the pulse sequence is similar.11 For the regime σn ≪ σy, as the sensitivity is the same, this is also the optimum for the std technique. However, when looking at the detection limit, the dependency on N is N^(−1/4) [see Eq. (13)]. Therefore, the optimum delay for detection could be different. When numerically optimizing this for our case with the N–V center, a slightly longer delay is optimal (about 0.6 T2*). Since the delay that we use results in only a slightly worse sensitivity (about 2.8%), the studied behavior for the purely mathematical case is not expected to be significantly different for the N–V center.

A downside of the std method is the reduction in sensitivity when the signal has a low std (thus amplitude) compared to the noise. Thus, only if the signal has no randomness does a conventional method have a better sensitivity here. There are many signals with rather small amplitudes, for example, in the search for dark matter.12 However, as the randomness of these signals still allows many stable periods, it is possible to accumulate signal over multiple (but far from all) periods, hence reducing both the effective std of the noise σn and the number of data points N. Other ways to improve the std of the noise are increasing the number of N–V centers, improving the readout (improved collection efficiency, single-shot readout), and employing entanglement.

Another limitation is that there is a maximum supported signal frequency, because the FID sequence samples the signal. Thus, the frequency should be below the Nyquist frequency, which could be increased by decreasing the delay at the expense of the sensitivity. However, an alternative is to use ac sequences, such as the Hahn-echo sequence. When the signal is synchronized perfectly to the sequence, the readout signal is at its minimum or maximum. On the other hand, if it is exactly out of sync, the readout signal is zero [see Fig. 6(a)]. Therefore, when the phase of the signal changes over time (or alternatively, the sequence starts can be randomized), the resulting data points also have a sine-like distribution. The measurements in Fig. 6(b) show that the sensitivity for B of the Hahn-echo method without randomness is √2 better than the sensitivity of the std method with randomness. Since the sensitivity for the std would be the same as for the Hahn-echo method, it is indeed expected, given Eq. (3), that the sensitivity for the amplitude is √2 worse, although in exchange one gains the advantages of the std method. It should be noted that the bandwidth of such an ac measurement is limited to a narrow band defined by the delay.

FIG. 6.

Ac std method. (a) When measuring an ac field with a Hahn-echo sequence, the resulting data point depends on the phase. (b) Hahn-echo signal (black crosses with the red-line fit, blue pluses, and magenta dots) vs magnetic field amplitude, or the std of the Hahn-echo signal (blue squares and magenta circles) vs magnetic field amplitude (offset for gradient comparison), for 100 measured periods of a 1 kHz signal. We compare the gradients in the linear regime as an indicator of the sensitivities, as the noise is the same for all measurements [Hahn-echo would use N points, resulting in a noise of σn/√N, which is in this case equal to the noise on the std from the fourth plot of Fig. 2(c)]. Without phase jumps, the conventional Hahn-echo method performs well, while with phase jumps, the performance quickly deteriorates, as expected from Fig. 2(e). However, the performance of the std method does not depend on the randomness, and its gradient is √2 less steep compared to the Hahn-echo method without randomness.


In conclusion, we have introduced the std method for quantum sensing. It enables rather long measurements under stringent circumstances, even when the randomness of the signal causes significant changes during the measurement time. Furthermore, it offers the potential to limit the memory requirement to a single number and the power usage owing to the linear computational complexity. Its sensitivity is similar to that of other coherence-based methods for signals stronger than the noise level and lower otherwise, which can be mitigated in several ways. This method is promising for applications that require robust long measurements of unknown signals and for unstable signals of which only the distribution can be measured, such as in harsh deep-sea mineral searches and in the search for dark matter. Moreover, it motivates a careful exploration of other measurable statistics to find their specific merits.

The authors acknowledge the financial support from KAKENHI (Grant No. 22K14560), MEXT Q-LEAP (Grant No. JPMXS0118067395), CREST (Grant No. JPMJCR23I5), and the Collaborative Research Program of ICR, Kyoto University (2021-114).

The authors have no conflicts to disclose.

E. D. Herbschleb: Conceptualization (equal); Formal analysis (lead); Funding acquisition (supporting); Investigation (lead); Writing – original draft (lead); Writing – review & editing (equal). S. Chigusa: Conceptualization (equal); Writing – review & editing (equal). R. Kawase: Resources (lead); Writing – review & editing (equal). H. Kawashima: Resources (supporting); Writing – review & editing (equal). M. Hazumi: Supervision (supporting); Writing – review & editing (equal). K. Nakayama: Supervision (supporting); Writing – review & editing (equal). N. Mizuochi: Funding acquisition (lead); Supervision (lead); Writing – review & editing (equal).

The data that support the findings of this study are available from the corresponding authors upon reasonable request.

The goal of this appendix is to give some insight into the difference between the std of σ̂D for small and large signals, for a regular signal. The std of sampled data containing both noise and signal is
$\hat{\sigma}_D = \sqrt{\frac{1}{N}\sum_{i=0}^{N-1}\left(n_i + y_i - \mu\right)^2}$, (A1)
with ni being the noise at the ith data point [for example, the points shown in Fig. 2(a)]. Since we try to find the distribution, we consider the specific data that result in a sampled σ̂D.
When the signal is zero, we have
$\hat{\sigma}_D = \sqrt{\frac{1}{N}\sum_{i=0}^{N-1}\left(n_i - \mu\right)^2} \approx \sqrt{\frac{1}{N}\sum_{i=0}^{N-1} n_i^2}$, (A2)
with the approximation since the noise follows a normal distribution with zero mean. As a large proportion of ni is of a similar magnitude, we could look at the rough approximation,
$\hat{\sigma}_D \approx \sqrt{\frac{1}{N}\,N n^2} = |n|$. (A3)
Thus, the std of σ̂D has roughly a half-normal distribution given its dependence on |n|, as the normal distribution for the noise n itself is symmetric around its zero mean. The std of such a distribution would be σn√(1 − 2/π), thus we get
$\sigma_{\hat{\sigma}_D} \approx \sqrt{1 - \frac{2}{\pi}}\,\frac{\sigma_n}{\sqrt{N}}$. (A4)
This is smaller than σn/√N, just like the non-approximated distribution, which has an uncertainty of σn/√(2N), as shown in the left plot of Fig. 2(c).
When the signal is large, we get via Eq. (A1),
$\hat{\sigma}_D \approx \sqrt{\frac{1}{N}\sum_{i=0}^{N-1}\left(y_i + n_i\right)^2}$, (A5)
with the approximation generally possible by adding a suitable offset to the signal, which does not change the std or its distribution. All data points with a relatively small signal have a small effect on the result, which we will neglect. The remaining M non-negligible points have a similar magnitude, so we could look at the rough approximation again,
$\hat{\sigma}_D \approx \sqrt{\frac{1}{N}\,M\left(y + n\right)^2} = \sqrt{\frac{M}{N}}\left(y + n\right)$. (A6)
Now, since yj ≫ nj ensures that the distribution of n is not centered around zero and thus remains intact, σ̂D has a normal distribution with std σn/√N. This is about the std of σ̂D shown in the fourth plot of Fig. 2(c).

It should be noted that this is independent of the signal shape. Even if it is a delta-like function, for a large enough σy, at least one signal value will be large enough, so the above is true with M = 1. We confirmed these results by numerical simulations.
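A sketch of such a confirmation (illustrative values; the signal here spans an integer number of periods so that its own sampled std is constant):

```python
import numpy as np

rng = np.random.default_rng(seed=5)

sigma_n, N, repeats = 10.0, 1000, 10_000
i = np.arange(N)

def spread_of_sampled_std(sigma_y):
    amp = np.sqrt(2.0) * sigma_y           # Eq. (3)
    stds = [(amp * np.sin(2 * np.pi * 8 * i / N + rng.uniform(0, 2 * np.pi))
             + rng.normal(0.0, sigma_n, N)).std() for _ in range(repeats)]
    return np.std(stds)

print(spread_of_sampled_std(0.0), sigma_n / np.sqrt(2 * N))    # no signal
print(spread_of_sampled_std(100.0), sigma_n / np.sqrt(N))      # large signal
```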

For estimating the noise std σn outside the linear regime of the sinusoidal shown in Fig. 3, its shape needs to be taken into account, as effectively the sine stretches the tails of the distribution when converting it from FID data to B. Moreover, the FID data can, due to shot-noise, have values outside the limits of this sinusoidal curve. Hence, we calculate σn numerically by generating data that have a normal distribution with an average equal to the offset of the sinusoidal shown in Fig. 3 and a standard deviation of σshot-noise. Then, we convert these data to magnetic field data while limiting the generated data within the range of the sine. Finally, we compute the std of this signal, which is σn. Within the linear regime, this gives the same result as shown in Fig. 3.
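A minimal sketch of this numerical conversion (the response-curve values are illustrative assumptions, not the experimental parameters):

```python
import numpy as np

rng = np.random.default_rng(seed=6)

# Illustrative FID response: S(B) = offset + amp * sin(2*pi*B/period).
offset, amp, period = 100.0, 20.0, 1.0e-6      # counts, counts, tesla
sigma_shot = 2.0                               # shot-noise std in counts

s = rng.normal(offset, sigma_shot, size=1_000_000)
s = np.clip(s, offset - amp, offset + amp)     # limit data to the sine range
B = np.arcsin((s - offset) / amp) * period / (2 * np.pi)

print(B.std())                                 # sigma_n
print(sigma_shot * period / (2 * np.pi * amp)) # linear-regime estimate
```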

Averaging over several data points effectively results in the piece-wise integrated signal (for clarity, we ignore the offset O as it does not affect the std),
$\bar{y}_i = \frac{1}{\tau}\int_{t_i}^{t_i+\tau} A\sin\left(2\pi f t + \phi_i\right)\mathrm{d}t$, (C1)
with τ being the averaging time and ϕi being the phase at the start of averaging for the ith averaged data point. This solves to
$\bar{y}_i = \frac{A}{2\pi f\tau}\left[\cos\left(\phi_i\right) - \cos\left(2\pi f\tau + \phi_i\right)\right]$. (C2)
Simplifying this equation results in a sinusoid for which we know the std [see Eq. (3)], since on average ϕi is uniformly distributed over [0, 2π); the simplified sinusoid is
$\bar{y}_i = A\,\frac{\sin\left(\pi f\tau\right)}{\pi f\tau}\,\sin\left(\pi f\tau + \phi_i\right)$. (C3)
Taking the std of the above-mentioned equation, and adding an averaged noise std depending on the number of averaged data points M, and a constant std C/√12 to model the variance of a linearly changing background resulting in the offset shown in Fig. 4(b), gives the used fitting function,
$\sigma\left(\tau\right) = \sqrt{\frac{A^2}{2}\left(\frac{\sin\left(\pi f\tau\right)}{\pi f\tau}\right)^2 + \frac{\sigma_n^2}{M} + \frac{C^2}{12}}$. (C4)
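The averaging factor of Eq. (C3) can be verified numerically (illustrative values):

```python
import numpy as np

f, A = 100.0, 1.0                               # illustrative values
t = np.arange(0.0, 1.0, 1e-6)
y = A * np.sin(2 * np.pi * f * t)

tau = 3.5e-3                                    # averaging window in seconds
M = int(tau / 1e-6)
averaged = y[: (y.size // M) * M].reshape(-1, M).mean(axis=1)

print(np.sqrt(2.0) * averaged.std())            # amplitude after averaging
print(A * abs(np.sin(np.pi * f * tau) / (np.pi * f * tau)))  # Eq. (C3) factor
```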
1. C. L. Degen, F. Reinhard, and P. Cappellaro, "Quantum sensing," Rev. Mod. Phys. 89, 035002 (2017).
2. D. R. Glenn, D. B. Bucher, J. Lee, M. D. Lukin, H. Park, and R. L. Walsworth, "High-resolution magnetic resonance spectroscopy using a solid-state spin sensor," Nature 555, 351–354 (2018).
3. T. van der Sar, F. Casola, R. Walsworth, and A. Yacoby, "Nanometre-scale probing of spin waves using single electron spins," Nat. Commun. 6, 7886 (2015).
4. J. F. Barry, M. J. Turner, J. M. Schloss, D. R. Glenn, Y. Song, M. D. Lukin, H. Park, and R. L. Walsworth, "Optical magnetic detection of single-neuron action potentials using quantum defects in diamond," Proc. Natl. Acad. Sci. U. S. A. 113, 14133–14138 (2016).
5. M. J. H. Ku, T. X. Zhou, Q. Li, Y. J. Shin, J. K. Shi, C. Burch, L. E. Anderson, A. T. Pierce, Y. Xie, A. Hamo, U. Vool, H. Zhang, F. Casola, T. Taniguchi, K. Watanabe, M. M. Fogler, P. Kim, A. Yacoby, and R. L. Walsworth, "Imaging viscous flow of the Dirac fluid in graphene," Nature 583, 537–541 (2020).
6. E. L. Hahn, "Spin echoes," Phys. Rev. 80, 580–594 (1950).
7. A. Schweiger and G. Jeschke, Principles of Pulse Electron Paramagnetic Resonance (Oxford University Press, 2001).
8. E. D. Herbschleb, H. Kato, T. Makino, S. Yamasaki, and N. Mizuochi, "Ultra-high dynamic range quantum measurement retaining its sensitivity," Nat. Commun. 12, 306 (2021).
9. E. Kreyszig, Advanced Engineering Mathematics (Wiley, 2011).
10. G. Wang, Y.-X. Liu, J. M. Schloss, S. T. Alsid, D. A. Braje, and P. Cappellaro, "Sensing of arbitrary-frequency fields using a quantum mixer," Phys. Rev. X 12, 021061 (2022).
11. E. D. Herbschleb, I. Ohki, K. Morita, Y. Yoshii, H. Kato, T. Makino, S. Yamasaki, and N. Mizuochi, "Low-frequency quantum sensing," Phys. Rev. Appl. 18, 034058 (2022).
12. S. Chigusa, M. Hazumi, E. D. Herbschleb, N. Mizuochi, and K. Nakayama, "Light dark matter search with nitrogen-vacancy centers in diamonds," arXiv:2302.12756 [hep-ph] (2023).
13. Particle Data Group et al., "Review of particle physics," Prog. Theor. Exp. Phys. 2022, 083C01.
14. M. Kawasaki and K. Nakayama, "Axions: Theory and cosmological role," Annu. Rev. Nucl. Part. Sci. 63, 69–95 (2013); arXiv:1301.1123 [hep-ph].
15. D. J. E. Marsh, "Axion cosmology," Phys. Rep. 643, 1–79 (2016); arXiv:1510.07633 [astro-ph.CO].
16. K. Knight, Mathematical Statistics (Taylor & Francis, 2000).
17. E. D. Herbschleb, H. Kato, Y. Maruyama, T. Danjo, T. Makino, S. Yamasaki, I. Ohki, K. Hayashi, H. Morishita, M. Fujiwara, and N. Mizuochi, "Ultra-long coherence times amongst room-temperature solid-state spins," Nat. Commun. 10, 3766 (2019).
18. B. L. Welch, "The significance of the difference between two means when the population variances are unequal," Biometrika 29, 350–362 (1938).
19. R. A. Fisher, "Studies in crop variation. I. An examination of the yield of dressed grain from Broadbalk," J. Agric. Sci. 11, 107–135 (1921).
20. N. F. Ramsey, "A molecular beam resonance method with separated oscillating fields," Phys. Rev. 78, 695–699 (1950).
21. C. W. J. Beenakker and M. Patra, "Photon shot noise," Mod. Phys. Lett. B 13, 337–347 (1999).
22. R. Kawase, H. Kawashima, H. Kato, N. Tokuda, S. Yamasaki, M. Ogura, T. Makino, and N. Mizuochi, "N-type diamond synthesized with tert-butylphosphine for long spin coherence times of perfectly aligned NV centers," J. Appl. Phys. 132, 174504 (2022).
23. H. Kato, M. Ogura, T. Makino, D. Takeuchi, and S. Yamasaki, "N-type control of single-crystal diamond films by ultra-lightly phosphorus doping," Appl. Phys. Lett. 109, 142102 (2016).
24. T. F. Chan, G. H. Golub, and R. J. LeVeque, "Algorithms for computing the sample variance: Analysis and recommendations," Am. Stat. 37, 242–247 (1983).