We develop a high-Q capacitive-sensing-based robust non-destructive evaluation (NDE) methodology that can be widely used in varied NDE applications. We show that the proposed method can detect defects across a host of robust regimes in which uncertainties such as lift-off, probe tilt, and fluctuations in sampling rates and step sizes are inherent in the data collection process. We explicitly characterize the corruption in the capacitive sensing data due to various lift-off based uncertainties. We use a Bayesian decision theoretic approach to rigorously understand the impact of these corruptions on defect identification efficacy. Using an optimally tuned weighted classification loss, we prove that it is theoretically feasible to accurately detect defect locations and sizes from capacitive sensing signals collected under the aforementioned uncertainties. The Bayesian decision theoretic study needs prior information for accurate detection that is not available in real NDE inspections. We therefore develop a solely data driven algorithm that analyzes the capacitive sensing signals without any prior knowledge of defect or uncertainty types. The developed algorithm is non-parametric and uses spatially adaptive denoising to weed out uncertainty-induced noise. By leveraging the spatial association in the capacitive sensing signals, our algorithm greatly improves on popular non-spatial approaches. Compared to popular thresholding methods and low-rank based denoising approaches, we demonstrate superior performance of the proposed method in terms of coverage and false positive metrics for defect identification. Using spatially adaptive denoising, we design a robust capacitive sensing method that can detect defects with high precision under various uncertainty regimes.

The structural integrity of safety-critical infrastructure such as aircraft, bridges, nuclear power plants, oil and gas transmission pipelines, and load-bearing metal structures in highways decreases with time. Nondestructive evaluation (NDE) plays an important role in the maintenance of these infrastructures by detecting significant erosion and other defects that crop up with usage. To provide effective diagnosis, NDE methods use several inline inspection techniques that access these structures non-invasively, without causing any further damage.1 Advances in sensing technology have led to the development of different NDE sensing modalities.2–14 However, in practice it is often extremely difficult to set up new non-destructive inspection procedures that abide by the ideal laboratory conditions under which these methods were developed and tested. As a result, in these situations, NDE data are collected under uncertainties such as lift-off, probe tilt, and fluctuations in sampling rates and step sizes.15,16 Contemporary NDE methods based on robotic systems can conduct very fast scans but often suffer from the aforementioned uncertainties due to anomalies in probe positioning. Lift-off is the variation of the distance between the probe and the material under test (MUT), whereas probe tilt is the angular deviation of the probe from perpendicularity. NDE data are often contaminated by the presence of unknown lift-offs due to coarseness of the MUT surface, non-conducting coatings, or irregular paint on the MUT.17,18 While lift-offs and probe tilts are the most common uncertainties, fluctuations in sampling rates and step sizes can also contaminate NDE data through operator movement related discrepancies, such as changes in the frictional force between the probe and material surfaces as well as misalignment of the probe and MUT coordinates. These uncertainties in the data collection process give rise to different kinds of corruption in the recorded signals. Defect identification algorithms that were developed with sensor data collected under ideal laboratory conditions fail to provide accurate defect analysis under these uncertainties, which corrupt the data with various kinds of noise contamination. Consequently, there is an urgent need to develop robust, automated, reliable defect identification methods that can operate with high precision under the uncertainties that practitioners encounter in real-world applications.19

Here, we develop a high-Q capacitive-sensing-based shortwave NDE method that is low cost, highly flexible, and easy to implement, as it does not require an extensive hardware setup at MHz frequencies. Unlike other popular NDE methods, it can be employed under a wide range of robust conditions because it enjoys the following advantages over competing NDE methods: (a) unlike capacitive sensing, ultrasonic testing (UT) normally requires a coupling medium and physical contact with the MUT.8 Air-coupled UT does not need a coupling medium, but it uses a long sound pulse for excitation, which makes accurate timing measurement difficult; moreover, it is not applicable to metal inspection due to the high acoustic impedance of metal structures.20 (b) Thermographic methods have low sensitivity unless very expensive thermal cameras are used for imaging, and they are not robust to temperature variations.21,22 X-ray imaging is not only expensive but also requires careful screening because of its detrimental ionizing radiation.9 (c) Microwave imaging based NDE does not penetrate deep into conducting materials and operates at GHz frequencies, which requires cumbersome data acquisition circuitry.23 (d) Conventional electromagnetic (EM) NDE techniques, such as magnetic flux leakage (MFL) and eddy-current (EC) methods, are mostly effective in inspecting highly conductive or ferromagnetic materials.2–4,10–14

We concentrate on the highly flexible, easily applicable capacitive sensing method and undertake a disciplined approach to understand how signals collected from defective samples get corrupted as uncertainties such as lift-off, speed, and fluctuations in sampling rates are introduced into the data collection process. In Sec. II, we describe the data collection methodology as well as the different robust setups considered in this study. The existing NDE literature has mainly focused on the effect of lift-off-based uncertainties in EC and MFL methods. Lift-off corrections in these traditional EM techniques have been developed based on novel hardware-based probes,24,25 new sensor array designs,26 and signal processing techniques.17,27 The probing method introduced in Ref. 17 uses reference signals to cancel lift-off-induced contamination in pulsed eddy current (PEC) signals, whereas a dual-frequency mode of EC inspection is used in Ref. 28 for measuring lift-off and accurate defect sizing. Signal processing techniques based on dynamic trajectories of the fast Fourier transform of PEC scans are used to reduce lift-off effects in Ref. 29, and features based on rise time, zero crossing, and differential time to peak are used in Ref. 30. Unlike for traditional EM methods, there has been very limited research on the scope of capacitive imaging (CI) under uncertainties. Specialized capacitive sensors have been designed to tackle lift-off effects,31–34 but they have limited applications. Here, we develop a very easy-to-implement capacitive sensing probe and provide a systematic study of the impact of uncertainties on the signals it captures. Based on this study, we develop a novel spatial adaptation based signal processing algorithm that is shown to provide very high defect identification rates. The resulting NDE method is cost effective, less time consuming, and can be employed across a wide range of applications because it does not require domain-specific modifications, complex probes, or the guidance of NDE experts.

In Sec. III, using a disciplined statistical framework, we compare the voltage readings recorded by the capacitive sensors under different kinds of uncertainties. We show that the corruption in the voltages due to uncertainties, which decreases the signal strength for defect detection, is not merely a homogeneous addition of noise over all the inspection readings. We analyze the differences in the capacitive sensing voltage readings collected under various kinds of lift-off using Gaussian mixture models. In each case, we document the target-to-background voltage comparisons, where the target is the defective area amidst the background of non-defective scan points. Based on this mixture model framework, in Sec. IV, we report the theoretically lowest possible limit on the defect misclassification error rates across varied uncertainties by considering the Bayes estimator. In most NDE applications, it is much more harmful to misclassify defects as non-defects [termed false negatives (FN) or "misses"] than to misclassify non-defects as defects [termed false positives (FP) or "false alarms"]. In this context, we evaluate the Bayes error under a weighted classification loss and report the minimum achievable FP rates when the FN rates are constrained below 10%, i.e., when we, at worst, underestimate defect sizes by 10%. Based on experimental data collected under different types of uncertainties, we found that the FP rates were quite low even when the FN rates were controlled below 10%. This suggests that it is theoretically possible to recover defect sizes under the uncertainties considered here.

Computation of the Bayes rules and errors reported in Sec. IV needed pertinent information on defects that is not available in real applications. Thus, those error rates mark the best theoretically possible errors that any non-spatial decision rule can obtain. In practice, it is very difficult to construct a solely data driven procedure that achieves such error rates. In Sec. V, we propose a data driven classifier of defective scan points. The classifier borrows information across geographically close scan points and is spatially adaptive. Aggregating the data from nearby locations can increase the signal-to-background ratio and, thus, the defect detection rate under uncertainty. The proposed approach non-parametrically recovers the structure of the defects without prior knowledge of the uncertainties in the data collection process. We explain the benefits of spatial adaptation and show that the proposed method yields accurate defect identification in highly robust conditions.

Figure 1 shows the framework of the proposed study. In order to provide a disciplined analysis of robust defect identification using capacitive sensing, we further divide the study into the following components:

  1. The first stage involves the collection of experimental data using a commonplace capacitive sensing method. In this stage, as described in Sec. II A, we set up an inexpensive capacitive-sensing probe with its parameters tuned to operate optimally in usual non-robust laboratory settings.

  2. Thereafter, varied types of uncertainties are systematically introduced into the data collection process through an experimental design (see Table I) that involves 36 sub-experiments (12 experiments, each under three lift-off scenarios) on defective samples containing a single defect of varying size.

  3. We compute the Bayes error in defect misclassification to understand the impact of the corruption in the voltage signals recorded by capacitive sensing under uncertainties. The nature of the corruption under different types of uncertainties is characterized.

  4. Based on the above characterization, a data driven method is developed to adaptively denoise capacitive sensing based voltage signals. The proposed non-parametric method uses spatial filtering and is shown to detect defective scan points with very high accuracy.

FIG. 1.

Framework used to develop robust capacitive sensing methodology under uncertainty.

TABLE I.

Description of the 12 experiments for which capacitive-sensing voltages were recorded from damaged, inexpensive steel samples under three different kinds of lift-off; the resulting voltage readings were analyzed in detail to understand the impact of lift-off.

Experiment No.   Step size (mm)   Sampling rate (s/mm)   Time taken (min)   Data dimension   Defect size   Remarks
I                1                100                    5.17               8 000 × 80       Medium        Medium time
II               1                500                    5.17               40 000 × 80      Medium        Medium time
III              2                100                    2.5                8 000 × 40       Medium        Minimum time
IV               2                500                    2.5                40 000 × 40      Medium        Minimum time
V                1                100                    5.17               8 000 × 80       Small         Medium time
VI               1                500                    5.17               40 000 × 80      Small         Medium time
VII              0.5              500                    10                 40 000 × 160     Small         Maximum time
VIII             2                100                    2.5                8 000 × 40       Small         Minimum time
IX               2                500                    2.5                40 000 × 40      Small         Minimum time
X                1                100                    5.17               8 000 × 80       Large         Medium time
XI               1                500                    5.17               40 000 × 80      Large         Medium time
XII              0.5              500                    10                 40 000 × 160     Large         Maximum time

We use a capacitive sensing-based probe that consists of a parallel plate capacitor along with a coupled inductor coil. The capacitive probe detects the change in the dielectric constant due to the distortion of the electrostatic field in the presence of defects in the sample. The voltage recorded by the probe across a grid of scan points produces an image that can lead to defect identification.7,35,36

To understand the working principle of the probe, note that the resonant frequency $f_R$ of the sensor decreases with its net capacitance $C$ as $f_R = 1/(2\pi\sqrt{LC})$, where $L$ is the inductance. We know that $C = \kappa A / d$, where $A$ is the area of the probe, $d$ is the distance between the excitation and the ground, and $\kappa$ is the dielectric constant of the medium. In the presence of defects, $\kappa$ changes as the medium is altered compared to non-defective scan points. This changes the capacitive loading $C$ of the fringing field, which in turn shifts $f_R$.
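For a quick numerical illustration of this relationship, the short Python sketch below evaluates $f_R = 1/(2\pi\sqrt{LC})$ for the 100 μH tank inductor used in the probe described below and a baseline capacitance of about 10 pF (a hypothetical value, chosen only so that the baseline frequency lands near the 5 MHz operating point quoted in this section); the scaling of $C$ with $\kappa$ mimics the defect-induced change in dielectric loading.

```python
import numpy as np

L_TANK = 100e-6   # 100 uH inductor used in the LC tank (from the probe description below)
C0 = 10e-12       # assumed baseline capacitance (~10 pF); hypothetical, chosen so f_R is near 5 MHz

def resonant_frequency(L, C):
    """Resonant frequency of an LC tank: f_R = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * np.pi * np.sqrt(L * C))

print(f"baseline f_R = {resonant_frequency(L_TANK, C0) / 1e6:.2f} MHz")

# A defect perturbs the effective dielectric constant kappa, and C scales with kappa,
# so even a few-percent change in kappa produces a measurable shift in f_R.
for kappa_scale in (1.00, 0.95, 0.90):
    f = resonant_frequency(L_TANK, C0 * kappa_scale)
    print(f"kappa scaled by {kappa_scale:.2f}: f_R = {f / 1e6:.3f} MHz")
```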

The sensor is designed as in Ref. 36. It is made of a Rogers 4350 board with a dielectric constant of 3.66 and a thickness of 1.5 mm. A commercial inductor coil of 100 μH is used to form the LC tank. The experimental scanning setup consists of the following components: (a) the LC tank based capacitive probe with a pickup coil; (b) a scanning robot arm to move the probe along the sample; (c) a power splitter and a directional coupler; (d) a Data Acquisition System (DAS) and code to produce the image; and (e) other associated units.

We used an Aerotech AGS1000 programmable XYZ scanner for scanning, and the probe is mounted on the scanner. A radio frequency (RF) source is used as the input. It is operated at the resonant frequency of the probe, which was estimated to be 5 MHz when the medium was just air with no sample. The probe is read in reflection mode. As shown in Fig. 1, it is connected to a power splitter that generates the reference and the measurement signals. The directional coupler is fed with the measurement signal and is connected to the pickup coil of the probe. As in Refs. 37 and 38, the reflected signal captured by the probe is passed from the directional coupler to a lock-in amplifier, which generates a voltage signal proportional to the difference between the reference and the measured reflected signals. This voltage signal is then passed into the DAS, which consists of a National Instruments data acquisition card (PCIe-6341). The voltage signal is sampled and digitized by a routine in the DAS, and the output is recorded. The complete experimental setup used in the laboratory is shown in Fig. 2.

FIG. 2.

Schematic of the entire experimental setup along with the designed capacitive sensor marked by the red block.


To understand how the data collected by capacitive sensors are altered as uncertainties increase in the system, we collect data across different regimes corresponding to varied defect types and capacitive sensing sampling methods. Table I describes the 12 different regimes/experiments. For each experiment, we collect data under three scenarios: (1) capacitive sensing with no lift-off; (2) capacitive sensing with a moderate lift-off of 3 mm; and (3) capacitive sensing with an increased lift-off of 5 mm. Aside from the lift-off uncertainties, other variations were also incorporated into the capacitive sensing methods. These variations were introduced across the 12 different experiments, and they were kept invariant across the three scenarios of each experiment. Table I shows the details of each experimental setup. Also, to reflect data collection uncertainties due to non-smoothness of the inspected sample surfaces, we used inexpensive steel samples bought off the shelf from a popular retailer, instead of perfectly lab-calibrated expensive samples, for collecting our data. Steel samples containing through wall hole (TWH) circular defects of radii 2, 4, and 5 mm were used. Figure 3 shows one such sample.

FIG. 3.

Left subplot shows the schematic of the steel samples used in the experiments. Samples were of dimension 20 × 20 cm² with a circular defect of radius r mm at the center; r equals 2 mm for small, 4 mm for medium, and 5 mm for large defects. Right subplot shows one such sample.


Figure 4 shows the capacitive sensing images for the three scenarios of experiment I. Note that, due to the coarseness of the samples in use, the voltage readings collected under no lift-off have some fluctuations at the non-defective scan points. The leftmost plot of Fig. 4 depicts this variation in the voltage readings, as the non-defective scan points at the bottom right corner show higher voltage than the non-defective scan points in the top left. The defective points are, however, clearly detectable under no lift-off, with much lower voltage than that at almost all non-defective scan points. With increasing lift-off, the voltage across all the scan points decreases, as shown in the legends of the middle and rightmost plots of Fig. 4. While defective scan points still have voltages on the lower end, the difference in voltage between defective and non-defective scan points is considerably reduced, particularly under increased lift-off. This makes classifying defective scan points based on the voltage readings much more difficult as lift-off increases.

FIG. 4.

Image plot of voltage readings from the three different scenarios of experiment 1. The Y and X axes are normalized in [0,1].


The patterns seen in the three scenarios of Fig. 4 are broadly seen across the other experiments as well. Figures 13–15 in Appendix B show the images of the voltages for all the other cases. Comparing the leftmost (no lift-off scenario) images in Figs. 13–15, we observe that the voltage magnitudes vary across experiments as uncertainties change across regimes, but the general pattern of diminishing voltages due to increased lift-off is consistent across the experiments. These figures demonstrate that, across all the considered regimes encompassing three different defect sizes, as lift-off increases we see a significant decrease in the gap between the averages of the voltage readings from the defective and the non-defective scan points. The reduction in this gap decreases the signal strength for classifying defective scan points. In Sec. III, we validate this phenomenon by considering a rigorous statistical framework and quantitatively studying the impact of lift-off on this gap.

We first define some notation. For the $j$th scenario of the $i$th experiment, consider observing readings $Y_{ij} = \{y_l^{(i,j)} : l \in \Lambda_i\}$ over the grid $\Lambda_i$. The grids vary over experiments $i$ when the step sizes are different. We next check whether the change in the readings due to lift-offs can be explained by independent corruption of the signal intensity across the grid. For this purpose, we consider the additive noise model $y_l^{(i,j)} = y_l^{(i,1)} + \epsilon_l^{(i,j)}$ for $j = 2, 3$. For any fixed $1 \le i \le 12$ and $j = 2, 3$, we test the null hypothesis that the $\epsilon_l^{(i,j)}$ are independent across $l \in \Lambda_i$. All 24 p-values are less than $10^{-5}$, and so, across all the scenarios, the null hypothesis that the changes in signal intensity are independent across locations is conspicuously rejected. This suggests that the corruption of the signals due to uncertainties in the data generation process is not homogeneous across the scan points inspected in the sample. We characterize the heterogeneity of the corruption of the capacitive sensor signals in the following paragraph.
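The paper does not spell out which test of spatial independence was used, so the sketch below shows only one common way of carrying out such a check: it computes Moran's I of the residual field $\epsilon^{(i,j)}$ under rook (4-neighbor) adjacency and obtains a permutation p-value. The residual field here is synthetic and merely stands in for the actual difference of two scans.

```python
import numpy as np

rng = np.random.default_rng(0)

def morans_I(field):
    """Moran's I for a 2-D field with rook (4-neighbour) binary adjacency weights."""
    z = field - field.mean()
    # sum of w_ij * z_i * z_j over horizontally and vertically adjacent pairs (both directions)
    num = 2.0 * np.sum(z[:, :-1] * z[:, 1:]) + 2.0 * np.sum(z[:-1, :] * z[1:, :])
    W = 2.0 * (z[:, :-1].size + z[:-1, :].size)          # total weight = 2 x number of adjacent pairs
    return (field.size / W) * num / np.sum(z ** 2)

def permutation_pvalue(field, n_perm=999):
    """Right-tailed permutation p-value for positive spatial autocorrelation."""
    obs = morans_I(field)
    flat = field.ravel().copy()
    null = np.empty(n_perm)
    for b in range(n_perm):
        rng.shuffle(flat)                                # break any spatial structure
        null[b] = morans_I(flat.reshape(field.shape))
    return obs, (1 + np.sum(null >= obs)) / (n_perm + 1)

# eps stands in for Y_liftoff - Y_no_liftoff reshaped to the scan grid; the added
# smooth trend makes the residuals spatially dependent, so the p-value is tiny.
eps = rng.normal(size=(80, 80)) + np.linspace(0, 1, 80)[None, :]
I, p = permutation_pvalue(eps)
print(f"Moran's I = {I:.3f}, permutation p-value = {p:.4f}")
```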

In Figs. 4 and 13–15, we see that the lift-off based corruption of the voltages recorded at the defective scan points is different from the corruption at the background non-defective scan points. To explain this difference, we consider a two-component normal mixture model that uses different voltage distributions for the non-defective and the defective scan points. Noting that, for any experiment $i$, the defective scan points are invariant across scenarios $j$, let $\Theta_i = \{\theta_l^{(i)} : l \in \Lambda_i\}$ denote the defective scan points in experiment $i$, i.e., $\theta_l^{(i)} = 1$ if $l$ is a defective scan point and $\theta_l^{(i)} = 0$ otherwise. For experiment $i$ and scenario $j$, consider the following conditional model on the capacitive sensor readings: for the non-defective background points, the readings are independent and identically distributed (i.i.d.) from a normal distribution,

$$ y_l^{(i,j)} \,\big|\, \theta_l^{(i)} = 0 \;\overset{\text{i.i.d.}}{\sim}\; N\!\Big(\Delta_0^{(i,j)},\ \big(\sigma_0^{(i,j)}\big)^2\Big), \qquad (1) $$

and the readings from the defective scan points are also i.i.d. from a normal distribution, with possibly different location and scale than the background,

$$ y_l^{(i,j)} \,\big|\, \theta_l^{(i)} = 1 \;\overset{\text{i.i.d.}}{\sim}\; N\!\Big(\Delta_1^{(i,j)},\ \big(\sigma_1^{(i,j)}\big)^2\Big). \qquad (2) $$

We further assume that, conditioned on $\Theta_i$, the $y_l^{(i,j)}$ are independent. In practice, we do not observe $\Theta_i$. But, for the experiments in Table I, we know the ground truth regarding the defect size and location. Using the knowledge that the defects in all the experiments in Table I were circular and placed at the center of the grid, we used contour plot based thresholding on the capacitive sensor readings under no lift-off to estimate $\Theta_i$ by $\Theta_i^{or}$. Note that we use the superscript "or" in $\Theta_i^{or}$ as a reminder that it is not a solely data driven estimator of the support of the defective points but uses oracle information on the defects. Figure 5 shows how $\Theta_i^{or}$ is estimated for experiment I based on the readings from scenario 1. As the readings in scenario 1 do not have corruption due to lift-off, $\Theta_i^{or}$ is well estimated. All the scan points in the grid with voltage lower than −2.8 were classified as defects, and those with voltage higher than −2.8 were classified as non-defects. As we had symmetric, regularly shaped, single defects in all the experiments in Table I, the simple thresholding scheme used here to calculate $\Theta_i^{or}$ was adequate for correctly pinpointing $\Theta_i$. Figures 16–18 in Appendix B show the contour plots for the rest of the experiments.
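As an illustration of this oracle construction, the following sketch thresholds a no-lift-off image at the contour-selected level of −2.8 V quoted above; the voltage image is synthetic (drawn with the experiment I, scenario 1 means and standard deviations from Table II) and only stands in for a real scan.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the no-lift-off scan of experiment I: background near -2.5 V,
# a circular defect near -3.0 V at the centre (values mimic Table II, scenario 1).
yy, xx = np.mgrid[0:80, 0:80]
true_defect = (yy - 40) ** 2 + (xx - 40) ** 2 < 8 ** 2
V0 = rng.normal(-2.516, 0.164, size=(80, 80))
V0[true_defect] = rng.normal(-3.046, 0.165, size=true_defect.sum())

# Oracle support Theta^or: scan points whose voltage falls below the contour threshold.
theta_oracle = V0 < -2.8
print("estimated defect fraction:", round(theta_oracle.mean(), 4))
print("agreement with ground truth:", round((theta_oracle == true_defect).mean(), 4))
```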

FIG. 5.

The contour plot of the leftmost image is superimposed on each of the images in Fig. 4. The threshold of −2.8 is selected based on the contour plot. All scan points in the leftmost image with voltage below −2.8 are denoted as defective. The same defective points are marked in the middle and rightmost plots corresponding to the 3 and 5 mm lift-offs of experiment I. The Y and X axes are normalized in [0,1].


In Table II, based on $\Theta_i^{or}$, we report the means $\Delta_0$, $\Delta_1$ and standard deviations $\sigma_0$, $\sigma_1$ for the background and the defective readings. Using these means and standard deviations, Fig. 6 shows the 95% confidence intervals for the voltage readings at the defective and non-defective scan points. From Table II and Fig. 6, we see the following general pattern across the 12 experiments: (a) the difference in mean voltage between the defective and non-defective areas decreases with the increase in uncertainties in the data collection process, and the defective scan points always have a lower average voltage; (b) the standard deviation of the signals from the defective areas, however, decreases with lift-off, suggesting that, although the mean difference in voltage readings between defects and non-defects decreases with lift-off, the readings at the defective points have lower variability and, thus, are more concentrated under lift-off. Figure 7 shows the distributions of the voltage readings at the defective and non-defective scan points for the three scenarios of experiment I. Figure 19 in Appendix B shows the voltage readings for the other 11 experiments. Figures 7 and 19 also illustrate the patterns discussed above for the changes in voltage readings with lift-off.

FIG. 6.

95% confidence intervals (CIs) for each of the three scenarios (no lift-off, 3 mm lift-off, and 5 mm lift-off, respectively) of the 12 experiments documented in Table II. The CIs of the non-defective scan-point readings are in light grey, whereas the CIs of the defective scan-point readings are in dark grey. The means of each distribution are marked in red.

FIG. 7.

Violin plots showing the distributions of voltage readings for the three cases of experiment I: (a) no lift-off, (b) moderate lift-off of 3 mm, and (c) high lift-off of 5 mm. The distributions of the readings from the defective scan points are shown in dark grey and the distributions of the non-defective areas are shown in light grey.

TABLE II.

The means and standard deviations of the voltage readings of the non-defective $(\Delta_0, \sigma_0)$ and defective $(\Delta_1, \sigma_1)$ regions are reported for all experiments in Table I. The regions are detected based on the readings with no lift-off.

Experiment   Defect proportion (%)   Scenario   Δ0       σ0      Δ1       σ1
I            1.67                    1          −2.516   0.164   −3.046   0.165
                                     2          −3.618   0.126   −3.845   0.052
                                     3          −3.961   0.120   −4.106   0.029
II           1.62                    1          −2.354   0.161   −2.887   0.162
                                     2          −3.528   0.140   −3.758   0.051
                                     3          −3.933   0.122   −4.078   0.028
III          1.65                    1          −2.308   0.162   −2.836   0.170
                                     2          −3.403   0.130   −3.624   0.053
                                     3          −3.738   0.124   −3.880   0.028
IV           1.60                    1          −3.278   0.142   −3.523   0.047
                                     2          −3.711   0.124   −3.863   0.024
                                     3          −3.694   0.125   −3.844   0.023
V            0.51                    1          −0.643   0.150   −1.033   0.060
                                     2          −1.959   0.133   −2.107   0.028
                                     3          −2.377   0.099   −2.469   0.016
VI           0.51                    1          −0.899   0.140   −1.291   0.060
                                     2          −2.051   0.124   −2.188   0.028
                                     3          −2.393   0.105   −2.489   0.016
VII          0.51                    1          −0.634   0.148   −1.029   0.061
                                     2          −1.761   0.137   −1.902   0.029
                                     3          −2.052   0.116   −2.149   0.019
VIII         0.50                    1          −0.869   0.142   −1.268   0.054
                                     2          −2.061   0.110   −2.199   0.027
                                     3          −2.462   0.105   −2.551   0.017
IX           0.51                    1          −0.685   0.139   −1.081   0.055
                                     2          −2.033   0.088   −2.177   0.027
                                     3          −2.503   0.090   −2.598   0.015
X            5.14                    1          −2.886   0.142   −3.070   0.070
                                     2          −3.171   0.104   −3.305   0.035
                                     3          −3.110   0.106   −3.238   0.032
XI           2.50                    1          −1.834   0.116   −2.367   0.207
                                     2          −2.832   0.095   −3.067   0.065
                                     3          −3.107   0.084   −3.248   0.032
XII          5.08                    1          −2.919   0.096   −3.095   0.069
                                     2          −3.347   0.108   −3.473   0.032
                                     3          −3.498   0.086   −3.596   0.026

In practice, we only observe $Y_{ij}$ and not $\Theta_i$. For scenario $j$ of experiment $i$, the marginal distribution of each reading $y_l^{(i,j)}$ is

$$ y_l^{(i,j)} \;\sim\; (1 - \pi_i)\, N\!\Big(\Delta_0^{(i,j)},\ \big(\sigma_0^{(i,j)}\big)^2\Big) \;+\; \pi_i\, N\!\Big(\Delta_1^{(i,j)},\ \big(\sigma_1^{(i,j)}\big)^2\Big), \qquad (3) $$

where $\pi_i = P\big(\theta_l^{(i)} = 1\big)$ is the proportional size of the defect in the grid of inspected points. For any $i$ and $j$, if we knew $\pi_i$, $\Delta_0^{(i,j)}$, $\Delta_1^{(i,j)}$, $\sigma_0^{(i,j)}$, and $\sigma_1^{(i,j)}$, then, based on observing $y_l^{(i,j)}$, we could estimate the support of the defective scan points as a point-wise classification problem. In Table III (last three columns), using the oracle values of the above parameters from Table II, we report the Bayes misclassification error rate (BME) for point-wise classification. We also report the defect misdetection or false negative (FN) rate and the false positive (FP) rate of misclassifying non-defects as defects. The BME is the overall misclassification rate, i.e., the FN and FP rates weighted by the proportions of defective and non-defective scan points, respectively.
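To make the point-wise Bayes calculation concrete, the sketch below evaluates the FP, FN, and BME of the unweighted Bayes rule for a two-component normal mixture by numerical integration; the parameters are taken from the experiment I, scenario 1 row of Table II, and with these inputs the printed rates agree, up to rounding, with the corresponding unweighted-loss entries of Table III.

```python
import numpy as np
from scipy.stats import norm

def bayes_fp_fn_bme(pi, mu0, sd0, mu1, sd1, w=1.0):
    """FP/FN/BME of the Bayes rule that calls 'defect' wherever
    pi * N(y; mu1, sd1) >= w * (1 - pi) * N(y; mu0, sd0) (w = 1 gives the unweighted loss)."""
    lo = min(mu0 - 8 * sd0, mu1 - 8 * sd1)
    hi = max(mu0 + 8 * sd0, mu1 + 8 * sd1)
    y = np.linspace(lo, hi, 200001)
    dy = y[1] - y[0]
    call_defect = pi * norm.pdf(y, mu1, sd1) >= w * (1 - pi) * norm.pdf(y, mu0, sd0)
    fp = np.sum(call_defect * norm.pdf(y, mu0, sd0)) * dy        # P(classified defect | non-defect)
    fn = np.sum((~call_defect) * norm.pdf(y, mu1, sd1)) * dy     # P(classified non-defect | defect)
    bme = (1 - pi) * fp + pi * fn                                # overall misclassification rate
    return fp, fn, bme

# Experiment I, scenario 1 (no lift-off): parameters from Table II
fp, fn, bme = bayes_fp_fn_bme(pi=0.0167, mu0=-2.516, sd0=0.164, mu1=-3.046, sd1=0.165)
print(f"FP = {fp:.1%}, FN = {fn:.1%}, BME = {bme:.1%}")
```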

TABLE III.

The false positive (FP) and false negative (FN) rates for classifying defective scan points based on the two-component mixture model with the parameters in Table II are reported across all the experiments. The weighted classification loss reduces the loss on FPs by the reported weights, resulting in FN rates that are lower than under the unweighted classification loss. The Bayes misclassification error is reported for both losses (column 9 for the unweighted loss and column 5 for the weighted loss).

                        Weighted classification loss (%)                 Unweighted classification loss (%)
Experiment   Scenario   FP      FN      BME     Weight                   FP      FN       BME
I            1          3.0     9.0     3.1     0.0400                   0.2     36.5     0.8
             2          10.0    9.9     10.0    0.0400                   0.0     100.0    1.7
             3          15.3    8.5     15.2    0.0333                   0.0     100.0    1.7
II           1          2.4     9.6     2.5     0.0500                   0.2     34.4     0.7
             2          12.4    8.7     12.3    0.0333                   0.0     100.0    1.6
             3          15.1    8.4     15.0    0.0333                   0.0     100.0    1.6
III          1          3.0     9.5     3.1     0.0400                   0.2     36.4     0.8
             2          12.2    9.3     12.2    0.0333                   0.0     100.0    1.7
             3          15.7    8.7     15.6    0.0333                   0.0     100.0    1.7
IV           1          10.0    8.1     9.9     0.0400                   0.0     100.0    1.6
             2          12.3    8.3     12.3    0.0400                   0.0     100.0    1.6
             3          12.3    8.2     12.2    0.0400                   0.0     100.0    1.6
V            1          2.1     8.5     2.1     0.0400                   0.0     100.0    0.5
             2          15.4    8.2     15.4    0.0100                   0.0     100.0    0.5
             3          15.0    7.6     15.0    0.0100                   0.0     100.0    0.5
VI           1          1.4     8.1     1.5     0.0500                   0.1     70.4     0.5
             2          16.3    9.0     16.2    0.0100                   0.0     100.0    0.5
             3          14.6    7.2     14.6    0.0100                   0.0     100.0    0.5
VII          1          1.7     9.3     1.7     0.0500                   0.0     100.0    0.5
             2          16.5    9.0     16.5    0.0100                   0.0     100.0    0.5
             3          15.6    7.9     15.6    0.0100                   0.0     100.0    0.5
VIII         1          1.1     8.1     1.2     0.0667                   0.1     68.2     0.5
             2          15.2    8.5     15.2    0.0100                   0.0     100.0    0.5
             3          15.3    7.9     15.3    0.0100                   0.0     100.0    0.5
IX           1          1.1     8.0     1.1     0.0667                   0.1     61.6     0.5
             2          10.4    8.8     10.4    0.0133                   0.0     100.0    0.5
             3          12.9    9.8     12.9    0.0133                   0.0     100.0    0.5
X            1          27.1    8.4     26.2    0.0500                   0.0     100.0    5.1
             2          20.6    6.8     19.9    0.0667                   0.0     100.0    5.1
             3          20.6    6.6     19.9    0.0667                   0.0     100.0    5.1
XI           1          1.4     9.0     1.6     0.0667                   0.1     19.1     0.6
             2          6.0     8.7     6.1     0.0500                   0.5     54.6     1.9
             3          12.1    8.6     12.0    0.0500                   0.0     100.0    2.5
XII          1          20.3    8.1     19.7    0.0400                   0.8     80.0     4.8
             2          20.9    6.7     20.2    0.0667                   0.0     100.0    5.1
             3          21.8    7.1     21.0    0.0667                   0.0     100.0    5.1

As the proportions of defective scan points were quite small in all the experiments in Table I, the FN rates (column 8 of Table III) were quite high compared to the FP rates (column 7 of Table III). However, in these NDE applications, FNs are more harmful than FPs, and so we next consider minimizing the FP rate while the FN rate is controlled below 10%.

With the FN rate of defective scan points controlled below 10%, we can detect at least 90% of the defective scan points, which is adequate to identify defect shapes provided the FP rate is also low. In order to minimize the FP rate while keeping the FN rate controlled below 10%, we consider minimizing the weighted classification loss as in the compound decision theory setup of Ref. 39, where $w_{ij} \ge 0$ is the relative weight of a false positive. For the $j$th scenario of the $i$th experiment, the weighted classification loss for a data driven estimator $\hat\Theta^{(i,j)}$ of the support of the defective points $\Theta_i$ is

$$ L_{w_{ij}}\big(\Theta_i, \hat\Theta^{(i,j)}\big) \;=\; \frac{1}{|\Lambda_i|} \sum_{l \in \Lambda_i} \Big[\, w_{ij}\, \mathbb{1}\big\{\hat\Theta^{(i,j)}[l] = 1,\ \theta_l^{(i)} = 0\big\} \;+\; \mathbb{1}\big\{\hat\Theta^{(i,j)}[l] = 0,\ \theta_l^{(i)} = 1\big\} \Big]. \qquad (4) $$

Here, $\mathbb{1}\{\cdot\}$ denotes the indicator function. When $w_{ij} = 1$, we recover the usual classification loss. The Bayes estimator $\hat\Theta_w^{(i,j)}$ that minimizes the weighted classification loss sets, for location $l \in \Lambda_i$, $\hat\Theta_w^{(i,j)}[l] = 1$ if

$$ \frac{\pi_i}{\sigma_1} \exp\!\left\{ -\frac{\big(y_l^{(i,j)} - \Delta_1\big)^2}{2\sigma_1^2} \right\} \;\ge\; w_{ij}\, \frac{1 - \pi_i}{\sigma_0} \exp\!\left\{ -\frac{\big(y_l^{(i,j)} - \Delta_0\big)^2}{2\sigma_0^2} \right\}, \qquad (5) $$

and $\hat\Theta_w^{(i,j)}[l] = 0$ otherwise. Note that, for ease of presentation, we have dropped the superscript $(i,j)$ from the mean and standard deviation parameters in the above expression. The detailed calculations are provided in Appendix A. We calculate the FP and FN rates of this weighted Bayes estimator $\hat\Theta_w^{(i,j)}$ as the weights $w_{ij}$ are varied over a wide range of values in $(0,1]$.

We consider weights in $S = \{1, 1/5, 1/10, \ldots, 1/50\} \cup \{1/75, 1/100, 1/200\}$. In Table III, we report the optimal weights $w_{ij}$ in $S$ that minimize the FP rate while keeping the corresponding FN rate below 10%. Note that, in all the cases, the sum of the FP and FN rates for the optimal weighted Bayes classifier is more than that of the BME of the unweighted case, which minimizes the unweighted misclassification error. However, the optimal weighted Bayes classifier controls the defect misdetection rate below 10% of the defective scan points. From Table III, we see that the FP rates of the optimal weighted Bayes classifier increase with lift-off but are usually well controlled below 15%. In experiments X and XII, the FP rates are a little higher; however, they also have high FP rates under no lift-off, which suggests that better FP vs FN trade-offs can be obtained by slightly increasing the FN rates in those two experiments. Note that the defective scan points are clustered together at the defect location. If the wrongly classified non-defective scan points are randomly distributed over the grid, they can easily be weeded out and reclassified as long as their proportion is controlled relative to the local density of defective scans. In Sec. V, we undertake such an approach that leverages the spatial connectedness of the defective scan points. The low FN and FP error rates for the weighted Bayes estimator in Table III (columns 3 and 4) show that capacitive sensing-based voltage readings contain significant information for detecting defects under the lift-offs considered in scenarios 2 and 3 of Table I. In Fig. 8, the false positive (FP) vs false negative (FN) curves for the three scenarios of experiment I are shown as the weights are varied in the weighted classification loss. For each curve, the right extreme point denotes the FP and FN proportions for the unweighted loss, whereas the left extreme point corresponds to weight 1/200. The points that correspond to the optimal weights, which minimize FP while controlling FN below 10%, are marked in red. Across all defect sizes considered in Table I, we validate that there is a consistent increase in the Bayes misclassification error rates with increasing lift-off. Note that, as the data used in this paper were collected solely through laboratory experiments, the voltage signals captured by our capacitive sensors had very high signal strength (as shown in Figs. 7 and 19) when there were no uncertainties. All our validation results are based on ground truth that is recovered from signals without uncertainties.
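A small sketch of how this weight search could be carried out is given below: for a given set of mixture parameters it computes the FP/FN rates of the weighted Bayes rule in Eq. (5) by numerical integration and then picks, from the weight set S, the weight with the smallest FP rate among those whose FN rate stays below 10%. The parameters shown are the experiment I, 5 mm lift-off values from Table II; the implementation details (grid, integration) are ours and only illustrative.

```python
import numpy as np
from scipy.stats import norm

def weighted_bayes_fp_fn(pi, mu0, sd0, mu1, sd1, w, n_grid=400001):
    """FP/FN rates of the weighted Bayes rule of Eq. (5), by numerical integration."""
    lo = min(mu0 - 8 * sd0, mu1 - 8 * sd1)
    hi = max(mu0 + 8 * sd0, mu1 + 8 * sd1)
    y = np.linspace(lo, hi, n_grid)
    dy = y[1] - y[0]
    call = pi * norm.pdf(y, mu1, sd1) >= w * (1 - pi) * norm.pdf(y, mu0, sd0)
    fp = np.sum(call * norm.pdf(y, mu0, sd0)) * dy
    fn = np.sum((~call) * norm.pdf(y, mu1, sd1)) * dy
    return fp, fn

# Weight set S and the experiment I / scenario 3 (5 mm lift-off) parameters from Table II
S = [1.0] + [1.0 / d for d in range(5, 55, 5)] + [1.0 / 75, 1.0 / 100, 1.0 / 200]
params = dict(pi=0.0167, mu0=-3.961, sd0=0.120, mu1=-4.106, sd1=0.029)

feasible = []
for w in S:
    fp, fn = weighted_bayes_fp_fn(w=w, **params)
    if fn <= 0.10:                       # constrain the miss (FN) rate to at most 10%
        feasible.append((fp, fn, w))

fp, fn, w_opt = min(feasible)            # smallest FP among the feasible weights
print(f"optimal weight = {w_opt:.4f}: FP = {fp:.1%}, FN = {fn:.1%}")
```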

FIG. 8.

The false positive (FP) vs false negative (FN) curves for the three scenarios (orange, blue, and green for scenarios 1, 2, and 3, respectively) of experiment I as the weights are varied in the weighted classification loss. For each curve, the right extreme point denotes the FP and FN proportions for the unweighted loss, whereas the left extreme point corresponds to weight 1/200. The points marked in red correspond to the optimal weights, which minimize FP while controlling FN below 10%; they are reported in the first three rows of Table III.


The errors reported in Table III mark the theoretically possible lower limit on the error rates that any decision rule that conducts point-wise classification without borrowing spatial information can achieve. To obtain the results in Table III, we used knowledge of $\Theta_i^{or}$, which is not available in real-world NDE applications. In this section, we develop a solely data based estimator that does not use any prior knowledge of defect sizes, shapes, or locations. Achieving the error rates in Table III with solely data driven estimators is not always possible, as it would require highly accurate estimates of $\pi_i$, $\Delta_0^{(i,j)}$, $\Delta_1^{(i,j)}$, $\sigma_0^{(i,j)}$, and $\sigma_1^{(i,j)}$. Estimating $\pi_i$, $\Delta_1^{(i,j)}$, and $\sigma_0^{(i,j)}$ in the presence of noise is extremely difficult and leads to significant estimation error.40–43 To mitigate these estimation problems, we develop a spatially adaptive procedure to estimate $\Theta_i$.

First, we consider a naïve thresholding estimator $\hat\Theta_T[l] = \mathbb{1}\{y_l < c_\alpha\}$, which classifies the $l$th scan point as a defect if its voltage is below the $\alpha$th quantile of the voltage readings over the grid. In Table IV in Appendix B, we report the performance of $\hat\Theta_T$ for a range of $\alpha$ values. As $\alpha$ increases, the coverage rate of the defect (defined as 1 − FN) increases at the cost of increased FP. We observe that, while $\hat\Theta_T$ works very well for scenario 1 when there is no lift-off, its performance deteriorates drastically as the signals are corrupted by increasing lift-off uncertainty. We next consider denoising the signals collected under lift-off uncertainty by using the adaptive shrinkage estimator proposed in Ref. 44. For denoising, we use the denoiseR package of Ref. 45, which estimates a low-rank signal assuming Gaussian noise by minimizing the risk estimation criterion in Refs. 44 and 46. On the denoised data, we again applied the elementwise thresholding described above and observed that, under lift-off, its performance is not much better than naïve thresholding of the voltage signals (see Table IV in Appendix B). Next, we consider a spatially adaptive estimator that is based on local variances in the capacitive imaging data. For voltage readings collected over any grid $\Lambda$, consider a symmetric neighborhood $N_l$ around the $l$th scan point in the grid; $N_l$ is a set of grid points that includes the scan point and its neighbors. Consider the variance $v_l$ associated with $N_l$,

$$ v_l \;=\; \frac{1}{|N_l|} \sum_{k \in N_l} \big( y_k - \bar{y}_{N_l} \big)^2, \qquad \bar{y}_{N_l} = \frac{1}{|N_l|} \sum_{k \in N_l} y_k. \qquad (6) $$

We compute this variance at each point in the grid. For $N_l$, we consider a square window of 25 scan points (5 × 5) centered at the $l$th scan point. We apply this local variance filter to the voltage readings and estimate the support of the defective grid points as

$$ \hat\Theta_S[l] \;=\; \mathbb{1}\{\, y_l < c_{\beta_1} \,\}\; \mathbb{1}\{\, v_l > c_{\beta_2} \,\}, \qquad (7) $$

where $c_{\beta_1} = \mathrm{quantile}\big(\{y_l : l \in \Lambda\};\, \beta_1\big)$ and $c_{\beta_2} = \mathrm{quantile}\big(\{v_l : l \in \Lambda\};\, 1 - \beta_2\big)$ are quantiles of the voltage readings and of the local variances, respectively. We universally set the values of $\beta_1$ and $\beta_2$ at 40%. Note that, as we use large values of the tuning parameters $\beta_1$ and $\beta_2$, the estimator is not very sensitive to minor changes in these values. Figure 9 shows $\hat\Theta_S$ for scenario 3 of experiment I. Consider the decomposition
$$ \hat\Theta_S \;=\; \hat\Theta_{S1} \odot \hat\Theta_{S2} \quad \text{(element-wise product)}, $$
where $\hat\Theta_{S1}[l] = \mathbb{1}\{y_l < c_{\beta_1}\}$ and $\hat\Theta_{S2}[l] = \mathbb{1}\{v_l > c_{\beta_2}\}$.
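A minimal sketch of the local-variance filter of Eq. (6) and the estimator $\hat\Theta_S$ of Eq. (7) is given below, using scipy's uniform (moving-average) filter to compute the 5 × 5 neighborhood moments; the synthetic image at the end is only a placeholder for a real lift-off scan.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(V, size=5):
    """Local variance over a size x size square neighbourhood: E[V^2] - (E[V])^2."""
    mean = uniform_filter(V, size=size)
    mean_sq = uniform_filter(V * V, size=size)
    return mean_sq - mean * mean

def theta_S(V, beta1=0.40, beta2=0.40, size=5):
    """Spatial estimator of Eq. (7): low voltage AND high local variance."""
    v = local_variance(V, size=size)
    c_beta1 = np.quantile(V, beta1)          # beta1-quantile of the voltage readings
    c_beta2 = np.quantile(v, 1.0 - beta2)    # (1 - beta2)-quantile of the local variances
    return (V < c_beta1) & (v > c_beta2)

# Placeholder voltage image: smooth lift-off-like background plus a weak central defect.
rng = np.random.default_rng(2)
yy, xx = np.mgrid[0:80, 0:80]
V = rng.normal(-3.9, 0.12, size=(80, 80)) + 0.1 * (xx / 80.0)   # mild spatial trend
V[(yy - 40) ** 2 + (xx - 40) ** 2 < 8 ** 2] -= 0.15             # defect lowers the voltage
print("fraction of scan points retained by Theta_S:", round(theta_S(V).mean(), 3))
```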

FIG. 9.

For scenario 3 of experiment I, we present the efficiency of the intermediate estimators that are involved in our proposed spatially adaptive classifier $\hat\Theta_{TSA}$ of the defective scan points. Each plot represents the confusion matrix of the corresponding classifier, with brown showing the true defective scan points that the classifier correctly classified, light yellow showing the true non-defective scan points it correctly classified, orange displaying false positives (non-defective scan points wrongly classified as defects), and dark yellow showing false negatives (defective scan points wrongly classified as non-defects). The axes in all the plots display normalized locations in [0,1]. The four plots from top left to bottom right show the gradual filtering process involved in the intermediate steps of the proposed $\hat\Theta_{TSA}$ estimator. Here, $\beta_1 = \beta_2 = 0.4$ and $\alpha = 0.025$.

TABLE IV.

The False Positive (FP) proportion and coverage rate for threshold estimators on raw voltage data, on low-rank model based denoised signals, and by our proposed spatially adaptive estimator when the threshold parameter α was set at 0.05 and 0.10, respectively, are reported. The results for the 36 sub-experiments of Table II are presented in order across rows.

                  Threshold at the 0.05 quantile                                 Threshold at the 0.10 quantile
Raw voltage        Low rank denoised   Spatial denoising        Raw voltage        Low rank denoised   Spatial denoising
FP      Coverage   FP      Coverage    FP      Coverage         FP      Coverage   FP      Coverage    FP      Coverage
0.039   1.000      0.034   1.000       0.008   0.996            0.083   1.000      0.083   1.000       0.081   1.000
0.041   0.990      0.034   0.987       0.008   0.993            0.089   0.997      0.084   0.996       0.077   1.000
0.045   0.582      0.044   0.582       0.008   0.879            0.092   0.663      0.092   0.663       0.083   0.996
0.035   1.000      0.034   1.000       0.009   0.998            0.090   1.000      0.084   1.000       0.081   1.000
0.043   0.952      0.035   0.936       0.008   0.994            0.088   0.983      0.084   0.979       0.083   1.000
0.047   0.552      0.044   0.549       0.009   0.799            0.104   0.639      0.093   0.616       0.083   1.000
0.038   1.000      0.034   1.000       0.008   0.995            0.087   1.000      0.084   1.000       0.082   1.000
0.036   0.956      0.035   0.956       0.008   0.992            0.088   0.986      0.084   0.986       0.082   1.000
0.048   0.485      0.044   0.472       0.009   0.968            0.095   0.562      0.094   0.556       0.082   1.000
0.043   1.000      0.034   1.000       0.011   0.949            0.085   1.000      0.084   1.000       0.085   0.972
0.051   0.662      0.040   0.624       0.011   0.842            0.088   0.772      0.088   0.772       0.082   0.884
0.049   0.601      0.041   0.548       0.010   0.843            0.092   0.740      0.089   0.712       0.085   0.898
0.046   1.000      0.045   1.000       0.019   0.999            0.095   1.000      0.095   1.000       0.095   0.999
0.049   0.423      0.048   0.418       0.019   0.989            0.102   0.540      0.097   0.529       0.094   0.999
0.052   0.250      0.049   0.221       0.021   0.709            0.120   0.386      0.098   0.362       0.080   0.999
0.047   1.000      0.045   1.000       0.019   1.000            0.100   1.000      0.095   1.000       0.093   1.000
0.054   0.406      0.048   0.380       0.020   0.997            0.103   0.523      0.097   0.509       0.094   1.000
0.053   0.196      0.050   0.196       0.021   0.732            0.107   0.329      0.099   0.324       0.090   1.000
0.046   1.000      0.045   1.000       0.019   1.000            0.095   1.000      0.095   1.000       0.094   1.000
0.051   0.155      0.049   0.135       0.022   0.187            0.100   0.449      0.098   0.449       0.094   0.999
0.065   0.011      0.052   0.009       0.022   0.517            0.109   0.038      0.102   0.037       0.091   1.000
0.045   1.000      0.045   1.000       0.018   1.000            0.097   1.000      0.095   1.000       0.095   1.000
0.049   0.547      0.047   0.530       0.017   0.993            0.100   0.694      0.097   0.693       0.092   1.000
0.055   0.022      0.050   0.013       0.020   0.609            0.100   0.139      0.099   0.132       0.093   1.000
0.045   1.000      0.045   1.000       0.020   1.000            0.098   1.000      0.095   1.000       0.095   1.000
0.049   0.989      0.045   0.989       0.018   0.999            0.107   0.995      0.097   0.995       0.094   1.000
0.055   0.515      0.049   0.515       0.018   0.771            0.109   0.627      0.101   0.591       0.094   1.000
0.000   1.000      0.000   0.987       0.000   0.925            0.062   1.000      0.058   1.000       0.043   0.956
0.020   0.682      0.016   0.660       0.001   0.458            0.086   0.842      0.058   0.817       0.059   0.792
0.024   0.560      0.023   0.549       0.000   0.470            0.083   0.726      0.066   0.685       0.057   0.752
0.000   1.000      0.000   0.956       0.000   0.948            0.054   1.000      0.048   1.000       0.060   0.987
0.016   0.650      0.016   0.649       0.000   0.472            0.071   0.655      0.066   0.655       0.063   0.639
0.021   0.611      0.018   0.606       0.001   0.460            0.089   0.648      0.066   0.645       0.066   0.639
0.000   1.000      0.000   0.985       0.000   0.942            0.054   1.000      0.049   1.000       0.053   0.984
0.024   0.548      0.022   0.546       0.003   0.390            0.078   0.711      0.064   0.700       0.063   0.723
0.027   0.629      0.020   0.592       0.004   0.356            0.077   0.726      0.065   0.695       0.052   0.710

Figure 9 (top left plot) shows that $\hat\Theta_{S1}$ contains almost all defective locations along with a large proportion of non-defective scan points. These non-defective scan points are not uniformly distributed over the grid, which would have made our task of filtering them easier, but are concentrated in the top left quadrant; this is a result of lift-off uncertainty compounded with probe tilt. Applying the filter $\hat\Theta_{S2}$ based on the local variances (Fig. 9, top right plot) helps clean out these non-defective scan points from $\hat\Theta_{S1}$ while keeping most of the defective scan points. The resultant estimator $\hat\Theta_S$ (see Fig. 9, bottom left image) still contains sporadic non-defective scan points. As these non-defective scan points are not concentrated, unlike in $\hat\Theta_{S1}$, we can easily weed them out by using a uniform local filter. For this purpose, we consider the averaging filter $F_A$, which divides the grid $\Lambda$ into 100 rectangles and computes the average of the signal over each of those rectangles. We apply $F_A$ to $\hat\Theta_S$. This smooths out (Fig. 9, bottom right plot) the sporadic misclassified non-defective scan points while keeping the concentrated mass of correctly detected scan points intact. Finally, using these as local weights on the original voltage readings, we consider the spatially adaptive estimator $\hat\Theta_{SA} = F_A(\hat\Theta_S) \odot \tilde{Y}$, where $\tilde{Y} = \max(Y) - Y$.

We conduct a binary classification of grid points as defects by thresholding $\hat\Theta_{SA}$ and considering $\hat\Theta_{TSA}[l] = \mathbb{1}\{\hat\Theta_{SA}[l] > t_{1-\alpha}\}$, where $t_{1-\alpha} = \mathrm{quantile}\big(\{\hat\Theta_{SA}[l] : l \in \Lambda\};\, 1-\alpha\big)$. In Table IV, we report the FP and coverage rates of the proposed estimator. As shown in recent works,43 using spatially adaptive local weights can tremendously help in recognizing underlying patterns in very noisy imaging data. Here, by thresholding the spatially adaptive estimator based on $F_A(\hat\Theta_{S1} \odot \hat\Theta_{S2})$, instead of the original signal or the low rank model based denoised signal, we observe that we can reasonably filter out the effects of probe tilt and lift-off based uncertainties.
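The following is a compact, end-to-end sketch of the classifier $\hat\Theta_{TSA}$ as described above; the 10 × 10 block partition for $F_A$, the 5 × 5 variance window, and the synthetic input image are our illustrative choices, not the exact implementation used for the reported results.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spatially_adaptive_classifier(V, beta1=0.40, beta2=0.40, alpha=0.10,
                                  window=5, n_blocks=10):
    """Sketch of Theta_TSA: local-variance filtering (Theta_S), block averaging (F_A),
    re-weighting by the flipped voltages (Y~ = max(Y) - Y), and final thresholding."""
    # Step 1: Theta_S -- low voltage AND high local variance, Eq. (7)
    mean = uniform_filter(V, size=window)
    var = uniform_filter(V * V, size=window) - mean * mean
    theta_s = (V < np.quantile(V, beta1)) & (var > np.quantile(var, 1.0 - beta2))

    # Step 2: averaging filter F_A -- mean of theta_s over an n_blocks x n_blocks partition
    fa = np.zeros_like(V, dtype=float)
    rows = np.linspace(0, V.shape[0], n_blocks + 1).astype(int)
    cols = np.linspace(0, V.shape[1], n_blocks + 1).astype(int)
    for r0, r1 in zip(rows[:-1], rows[1:]):
        for c0, c1 in zip(cols[:-1], cols[1:]):
            fa[r0:r1, c0:c1] = theta_s[r0:r1, c0:c1].mean()

    # Step 3: Theta_SA = F_A(Theta_S) * Y~, with Y~ = max(Y) - Y (defects have low voltage)
    theta_sa = fa * (V.max() - V)

    # Step 4: Theta_TSA -- declare the top alpha fraction of Theta_SA defective
    return theta_sa > np.quantile(theta_sa, 1.0 - alpha)

# Placeholder scan standing in for a 5 mm lift-off image (experiment I, scenario 3 scale)
rng = np.random.default_rng(3)
yy, xx = np.mgrid[0:80, 0:80]
defect = (yy - 40) ** 2 + (xx - 40) ** 2 < 8 ** 2
V = rng.normal(-3.961, 0.120, size=(80, 80)) + 0.05 * (xx / 80.0)   # mild tilt-like trend
V[defect] = rng.normal(-4.106, 0.029, size=defect.sum())
pred = spatially_adaptive_classifier(V)
print("classified defective fraction:", round(pred.mean(), 3))
print("coverage of the true defect:", round(pred[defect].mean(), 3))
```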

Table IV in Appendix B and Fig. 10 compare the efficacy of the proposed method with naïve thresholding and low rank signal denoising. They show that, using the aforementioned spatially adaptive estimator based on local features of the imaging data, it is possible to attain appreciable coverage and a low false positive rate in detecting defective scan points under the varied uncertainties considered in the paper. From Table IV and Fig. 10, we observe that thresholding based on the raw voltage signals is effective under no lift-off but provides very low coverage as lift-off increases. Low rank model based denoising also does not help: it reduces the false positive rate a bit but suffers from insufficient coverage. The proposed spatially adaptive estimator, however, can provide considerably high coverage with very tight control of the false positive rate. As such, with the false positive rate controlled at only 2%, the spatially adaptive estimator provides sufficient coverage under lift-off in all experiments barring VII, X, XI, and XII. When the limit on the false positive proportion is raised to 10%, the spatially adaptive estimator provides sufficient coverage across all the 36 regimes considered. It is to be noted that there is a discernible difference in the performance of the estimator (as shown in Fig. 10) between regimes 1–30 and 31–36. This is due to the fact that regimes 31–36 contain defects of considerably different characteristics than those in the previous regimes. Overall, the results in Table IV and Fig. 10 show that the spatially adaptive estimator based on local features of the imaging data can attain appreciable coverage and a low false positive rate in detecting defective scan points on the grid under the different lift-off uncertainties considered in the paper.
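For completeness, a small helper of the kind used to produce such comparisons is sketched below; note that the exact normalization of the reported false positive proportion (fraction of all grid points vs fraction of non-defective points) is not restated here, so the convention in the code is an assumption.

```python
import numpy as np

def fp_and_coverage(pred, truth):
    """False-positive proportion and coverage for binary defect masks.
    Coverage is the fraction of truly defective scan points that are recovered (1 - FN rate);
    FP is computed here as a fraction of the whole grid (an assumed convention)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    fp = np.mean(pred & ~truth)
    coverage = np.mean(pred[truth]) if truth.any() else float("nan")
    return fp, coverage
```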

FIG. 10.

We plot the False Positive (FP) proportions and Coverage rates across the 36 sub-experiments (plotted maintaining the order of rows reported in Table II). We present the performance of threshold estimators on raw voltage data (in black), on low-rank model based denoised signals (in red), and by our proposed spatially adaptive estimator (in green) when the threshold parameter was set at 0.05 (top panel) and 0.10 (bottom panel).


In Eq. (6), we considered fixed-size rectangular neighborhoods. They can easily be replaced by Gaussian kernel filters that weigh each scan point in the lattice in inverse proportion to the exponential of its distance from the scan point under study. In this respect, the proposed filters are similar to signal processing techniques that use Gaussian filters. However, we found that the local variances in these data sets carry important information, and so we filtered on the variances. In this respect, we differ from traditional filtering, where filtering is based on signal intensity or amplitude rather than on variances.
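As a sketch of that variant, the Gaussian-weighted analogue of the local variance in Eq. (6) can be computed with a standard Gaussian filter; the bandwidth value below is a hypothetical choice.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_local_variance(V, sigma=1.5):
    """Gaussian-weighted local variance: Var_w[V] = E_w[V^2] - (E_w[V])^2,
    where E_w is a Gaussian-weighted local average with bandwidth sigma."""
    mean = gaussian_filter(V, sigma=sigma)
    mean_sq = gaussian_filter(V * V, sigma=sigma)
    return mean_sq - mean * mean
```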

All the experiments considered in Table IV contained a single defect. Next, we consider two experiments (MI and MII) in which multiple defects coexist. Each experiment had three scenarios as before, pertaining to no lift-off and to 3 and 5 mm lift-offs, along with the other embedded uncertainties. Figure 11 shows the structure of the defects. There were 4 and 12 defects of varying shapes and sizes in experiments MI and MII, respectively.

FIG. 11.

The schematic shows the defects used in experiments MI (left) and MII (right). MI had four circular defects a, b, c, and d of diameters 48, 20, 8, and 1 mm, respectively. In MII, along with these four defects, there are four equilateral triangular defects (marked e, f, g, and h) with sides of 42, 28, 10, and 2 mm, respectively, and four square defects (marked i, j, k, and l) with sides of 30, 20, 6, and 3 mm, respectively.


Figure 12 shows the image plots of the voltage readings collected by our capacitive sensing probe.

FIG. 12.

Image plot of voltage readings from the three different scenarios of experiments MI (top) and MII (bottom), respectively. The Y and X axes are normalized in [0,1].

FIG. 13.

Image plot of voltage readings from three different scenarios (across columns from left to right, we have no lift-off, 3 mm lift-off, and 5 mm lift-off) of experiments II–V (across rows), respectively.

FIG. 14.

Image plots of voltage readings for experiments VI–IX (rows) under the three scenarios (columns, from left to right: no lift-off, 3 mm lift-off, and 5 mm lift-off).

FIG. 15.

Image plots of voltage readings for experiments X–XII (rows) under the three scenarios (columns, from left to right: no lift-off, 3 mm lift-off, and 5 mm lift-off).

FIG. 16.

The contour plot of the leftmost image is superimposed on each of the later images for experiments II–V. The thresholds are selected based on the contour plot of the leftmost image in each experiment.

FIG. 17.

The contour plot of the leftmost image is superimposed on each of the later images for experiments VI–IX. The thresholds are selected based on the contour plot of the leftmost image in each experiment.

FIG. 18.

The contour plot of the leftmost image is superimposed on each of the later images for experiments X–XII. The thresholds are selected based on the contour plot of the leftmost image in each experiment.

FIG. 19.

Violin plots of the distribution of voltage readings for the three cases of experiments II–XII under no lift-off, 3 mm (medium) lift-off, and 5 mm (high) lift-off, respectively.


In Table V in Appendix B, we report the false positive proportions and defect detection coverage rates achieved by applying our proposed spatial denoising algorithm to the capacitive sensing data. We observe that the proposed method not only outperforms the aforementioned competing algorithms but also attains high coverage and a low false positive rate in detecting the multiple, disconnected defect clusters.

TABLE V.

False positive and coverage rates for coexisting defects. Here, α was set at 0.10.

Experiment   Scenario          Raw voltage          Low-rank denoised      Spatial denoising
                               FP       Coverage    FP       Coverage      FP       Coverage
MI           No lift-off       0.034    1.000       0.033    1.000         0.036    1.000
MI           3 mm lift-off     0.034    0.999       0.034    0.999         0.052    0.979
MI           5 mm lift-off     0.055    0.822       0.054    0.819         0.076    0.887
MII          No lift-off       0.057    1.000       0.056    1.000         0.055    1.000
MII          3 mm lift-off     0.057    0.999       0.056    0.999         0.055    1.000
MII          5 mm lift-off     0.073    0.870       0.069    0.866         0.080    0.930

We develop a robust NDE methodology based on shortwave capacitive sensing. The proposed method is inexpensive, as it does not rely on complex, costly sensors. It requires no expert supervision and, through spatially adaptive denoising, produces instantaneous defect identification under the varied uncertainty regimes considered here. Moreover, the data collection apparatus is light and flexible and can be deployed across a wide range of NDE applications.

We demonstrate the high efficacy of the proposed NDE methodology on non-smooth samples under varied lift-off uncertainties. We provide a principled, Gaussian mixture model based analysis to determine when defective scan points in a sample can be detected with high accuracy while keeping the false positive misclassification rate for non-defective scan points low. This leads to identification of defect characteristics such as location, size, and shape with the desired accuracy. The results are validated on experimental data from samples containing a single defect of three different sizes as well as from samples containing multiple coexisting defects. We show that the proposed spatially adaptive denoising algorithm leverages the spatial contiguity of defective scan points and can reliably identify defects from capacitive sensing data contaminated by uncertainties due to lift-off, probe tilt, and low sampling rates.

The performance of the defect detection algorithm is sensitive to the threshold and neighborhood size parameters. A larger threshold value increases coverage but also results in a higher false discovery rate, so the threshold needs to be set according to the application. Here, we have considered fixed neighborhood shapes and sizes. Data driven choices of neighborhood size can further improve our methodology; this can be incorporated by using Gaussian kernel density based filters instead of the square grids used in Sec. V and then adaptively choosing the kernel bandwidth. We plan to develop theoretical support for selecting these hyper-parameters in future work.
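As a simple illustration of how the threshold could be set in an application, the sketch below chooses the smallest threshold whose flagged proportion on a defect-free calibration scan stays within a false positive budget α; it assumes such a clean scan (or region) is available, and the name score stands in for whatever per-point detection statistic is used.

```python
# Minimal sketch (illustrative only): calibrating the detection threshold so that
# the proportion of points flagged on a defect-free calibration scan is at most alpha.
import numpy as np

def calibrate_threshold(scores_defect_free, alpha=0.10):
    """Smallest threshold t such that mean(score > t) <= alpha on the clean scan."""
    # The (1 - alpha) quantile leaves at most an alpha fraction of clean scores above it.
    return np.quantile(scores_defect_free, 1.0 - alpha)

# Usage with synthetic per-point scores from a scan known to contain no defect
rng = np.random.default_rng(1)
clean_scores = rng.normal(size=10_000)
t = calibrate_threshold(clean_scores, alpha=0.10)
print(f"threshold = {t:.3f}, empirical FP rate = {(clean_scores > t).mean():.3f}")
```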

Our method would need some manual adaptation under varying weather conditions and humidity levels. We need the RF source to be set near the resonant frequency of the probe which was estimated to be 5 MHz for our lab experiments. If the medium between the sample and the probe is much different from our experimental setup, then the RF source needs to manually reset to the corresponding resonant frequency. The problem can be solved by using multiple probes each operating at different frequencies. However, it would require multi-parametric estimation. We aim to study it in our future works. Another potentially useful direction of research would be to study the applicability of the spatially adaptive estimator developed in this paper for analyzing NDE data collected with uncertainties by other NDE methods.

The work was supported in part by the DOT/PHMSA, Slow Crack Growth Evaluation of Vintage Polyethylene Pipes (Core Research Program) with Award No. DTPH5615T00007. The work was also partially supported by the DOT/PHMSA CAAP program with Award Nos. 693JK31850007CAAP and 693JK32050002CAAP. We thank the associate editor and two referees for constructive suggestions to improve the paper.

The authors have no conflicts to disclose.

S. Mukherjee: Conceptualization (equal); Formal analysis (lead); Methodology (lead); Software (lead); Writing – original draft (equal). D. Kumar: Data curation (equal); Methodology (equal); Software (equal). L. Udpa: Methodology (equal); Resources (equal); Supervision (equal). Y. Deng: Conceptualization (equal); Funding acquisition (equal); Methodology (equal); Project administration (equal); Supervision (equal); Writing – review & editing (equal).

The data that support the findings of this study are available from the corresponding author upon reasonable request.

By Theorem 2 of Ref. 39, we know that for experiment i and case j, the weighted classification loss is minimized by the estimator,

where ϕ(y|Δ,σ) denotes the normal density with mean Δ and standard deviation σ evaluated at y. Taking logarithm on both sides of the inequality above, Θ^w(i,j)[l] reduces to

The weighted Bayes classifier in the paper follows directly from the above by noting that log ϕ(y | Δ, σ) = −(y − Δ)²/(2σ²) − log σ, up to an additive constant that does not affect the comparison. The FP and FN error rates of this estimator are given by

We calculate these by Monte Carlo simulations and then optimize the weights for each (i, j) pair. The results are reported in Table III, which also contains the FP and FN rates for the unweighted classification loss, i.e., when wij = 1.
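The sketch below shows how such Monte Carlo FP/FN estimates can be obtained for a generic weighted two-component Gaussian rule that flags a reading y as defective when w·ϕ(y | Δ₁, σ₁) > ϕ(y | Δ₀, σ₀); the form of the weighting and the parameter values are illustrative placeholders, not the calibrated quantities reported in Table III.

```python
# Minimal sketch (illustrative placeholders, not the paper's calibrated rule):
# Monte Carlo estimation of FP/FN rates for a weighted two-component Gaussian
# classifier that flags y as defective when w * phi(y | mu1, s1) > phi(y | mu0, s0).
import numpy as np
from scipy.stats import norm

def mc_fp_fn(w, mu0, s0, mu1, s1, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    y0 = rng.normal(mu0, s0, size=n)     # draws from the non-defective component
    y1 = rng.normal(mu1, s1, size=n)     # draws from the defective component
    flag = lambda y: np.log(w) + norm.logpdf(y, mu1, s1) > norm.logpdf(y, mu0, s0)
    fp = flag(y0).mean()                 # non-defective readings wrongly flagged
    fn = (~flag(y1)).mean()              # defective readings that are missed
    return fp, fn

# Example: sweep the weight w and inspect the resulting FP/FN trade-off
for w in (0.5, 1.0, 2.0):
    print(w, mc_fp_fn(w, mu0=0.0, s0=1.0, mu1=2.0, s1=1.5))
```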

Tables IV and V, described in Sec. V, are presented here. We also provide a pictorial description of the capacitive sensing data for all 12 regimes. In the main paper, Figs. 4, 5, and 7 describe different attributes of the data from the three cases of experiment I; in Figs. 13–19, we provide the corresponding plots for the other experiments. Figures 13–15 supplement Fig. 4, Figs. 16–18 supplement Fig. 5, and Fig. 19 corresponds to Fig. 7 in the main paper. These figures display, across the concerned experiments, the reproducible signaling patterns described in the main paper. The tables and figures are provided after the references.

1. D. E. Bray and R. K. Stanley, Nondestructive Evaluation: A Tool in Design, Manufacturing, and Service (CRC Press, 2018).
2. N. Gloria, M. Areiza, I. Miranda, and J. Rebello, "Development of a magnetic sensor for detection and sizing of internal pipeline corrosion defects," NDT&E Int. 42, 669–677 (2009).
3. S. Mukherjee, X. Huang, L. Udpa, and Y. Deng, "NDE based cost-effective detection of obtrusive and coincident defects in pipelines under uncertainties," in 2019 Prognostics and System Health Management Conference (PHM-Paris) (IEEE, 2019), pp. 297–302.
4. S. Mukherjee, X. Huang, V. T. Rathod, L. Udpa, and Y. Deng, "Defects tracking via NDE based transfer learning," in 2020 IEEE International Conference on Prognostics and Health Management (ICPHM) (IEEE, 2020), pp. 1–8.
5. R. Wagner, O. Goncalves, A. Demma, and M. Lowe, "Guided wave testing performance studies: Comparison with ultrasonic and magnetic flux leakage pigs," Insight: Non-Destr. Test. Cond. Monit. 55, 187–196 (2013).
6. E. Mohseni, H. Habibzadeh Boukani, D. Ramos França, and M. Viens, "A study of the automated eddy current detection of cracks in steel plates," J. Nondestr. Eval. 39, 6 (2020).
7. G. G. Diamond and D. A. Hutchins, "A new capacitive imaging technique for NDT," in Proceedings of European Conference on NDT (European Federation for Non-Destructive Testing, 2006), pp. 1–8.
8. T. Gan, D. Hutchins, D. Billson, and D. Schindel, "The use of broadband acoustic transducers and pulse-compression techniques for air-coupled ultrasonic imaging," Ultrasonics 39, 181–194 (2001).
9. H. Kriesz, "Radiographic NDT—A review," NDT Int. 12, 270–273 (1979).
10. S. M. Haugland, "Fundamental analysis of the remote-field eddy-current effect," IEEE Trans. Magn. 32, 3195–3211 (1996).
11. T. V. Venkatsubramanian and B. A. Unvala, "An AC potential drop system for monitoring crack length," J. Phys. E: Sci. Instrum. 17, 765 (1984).
12. J. W. Wilson and G. Y. Tian, "3D magnetic field sensing for magnetic flux leakage defect characterisation," Insight: Non-Destr. Test. Cond. Monit. 48, 357–359 (2006).
13. T. Shibata, H. Hashizume, S. Kitajima, and K. Ogura, "Experimental study on NDT method using electromagnetic waves," J. Mater. Process. Technol. 161, 348–352 (2005).
14. D. Jiles, "Review of magnetic methods for nondestructive evaluation," NDT Int. 21, 311–319 (1988).
15. E. Mohseni and M. Viens, Sensitivity of Eddy Current Signals to Probe's Tilt and Lift-off While Scanning Semi-elliptical Surface Notches—A Finite Element Modeling Approach (Curran Associates, Inc., 2017).
16. J. Lim, Data Fusion for NDE Signal Characterization (Iowa State University, 2001).
17. G. Y. Tian and A. Sophian, "Reduction of lift-off effects for pulsed eddy current NDT," NDT&E Int. 38, 319–324 (2005).
18. C. Mandache, M. Brothers, and V. Lefebvre, "Time domain lift-off compensation method for eddy current testing," NDT.net 10, 1–7 (2005).
19. R. Gongzhang, M. Li, T. Lardner, and A. Gachagan, "Robust defect detection in ultrasonic nondestructive evaluation (NDE) of difficult materials," in 2012 IEEE International Ultrasonics Symposium (IEEE, 2012), pp. 467–470.
20. J. Buckley and H. Loertscher, "Frequency considerations in air-coupled ultrasonic inspection," Insight 41, 696–699 (1999).
21. D. Palumbo, R. Tamborrino, U. Galietti, P. Aversa, A. Tatì, and V. Luprano, "Ultrasonic analysis and lock-in thermography for debonding evaluation of composite adhesive joints," NDT&E Int. 78, 1–9 (2016).
22. C. Meola, S. Boccardi, G. Carlomagno, N. Boffa, F. Ricci, G. Simeoli, and P. Russo, "Impact damaging of composites through online monitoring and non-destructive evaluation with infrared thermography," NDT&E Int. 85, 34–42 (2017).
23. Y. Xiong, Z. Liu, C. Sun, and X. Zhang, "Two-dimensional imaging by far-field superlens at visible wavelengths," Nano Lett. 7, 3360–3365 (2007).
24. A. N. Abdalla, K. Ali, J. K. Paw, D. Rifai, and M. A. Faraj, "A novel eddy current testing error compensation technique based on Mamdani-type fuzzy coupled differential and absolute probes," Sensors 18, 2108 (2018).
25. H. Hoshikawa, K. Koyama, and H. Karasawa, "A new eddy current surface probe without lift-off noise," AIP Conf. Proc. 657, 413 (2003).
26. A. McNab and J. Thomson, "An eddy current array instrument for application on ferritic welds," NDT&E Int. 28, 103–112 (1995).
27. B. Rao, B. Raj, T. Jayakumar, and P. Kalyanasundaram, "An artificial neural network for eddy current testing of austenitic stainless steel welds," NDT&E Int. 35, 393–398 (2002).
28. D. He and M. Yoshizawa, "Dual-frequency eddy-current NDE based on high-Tc rf SQUID," Physica C 383, 223–226 (2002).
29. Y. Fu, M. Lei, Z. Li, Z. Gu, H. Yang, A. Cao, and J. Sun, "Lift-off effect reduction based on the dynamic trajectories of the received-signal fast Fourier transform in pulsed eddy current testing," NDT&E Int. 87, 85–92 (2017).
30. Y. He, M. Pan, F. Luo, and G. Tian, "Reduction of lift-off effects in pulsed eddy current for defect classification," IEEE Trans. Magn. 47, 4753–4760 (2011).
31. T. Chen and N. Bowler, "A rotationally invariant capacitive probe for materials evaluation," Mater. Eval. 70, 161–172 (2012).
32. B. Auld, A. Clark, S. Schaps, and P. Heyliger, "Capacitive probe array measurements and limitations," in Review of Progress in Quantitative Nondestructive Evaluation (Springer, Boston, MA, 1993), pp. 1063–1070.
33. T. Chen and N. Bowler, "Design of interdigital spiral and concentric capacitive sensors for materials evaluation," AIP Conf. Proc. 1511, 1593–1600 (2013).
34. X. Yin, C. Li, Z. Li, W. Li, and G. Chen, "Lift-off effect for capacitive imaging sensors," Sensors 18, 4286 (2018).
35. M. Morozov, W. Jackson, and S. Pierce, "Capacitive imaging of impact damage in composite material," Compos. B Eng. 113, 65–71 (2017).
36. D. Kumar, S. Karuppuswami, Y. Deng, and P. Chahal, "A wireless shortwave near-field probe for monitoring structural integrity of dielectric composites and polymers," NDT&E Int. 96, 9–17 (2018).
37. S. Mukherjee, X. Huang, L. Udpa, and Y. Deng, "A Kriging based fast and efficient method for defect detection in massive pipelines using magnetic flux leakages," in ASME International Mechanical Engineering Congress and Exposition (American Society of Mechanical Engineers, 2020), Vol. 84669, p. V014T14A010.
38. S. Mukherjee, X. Huang, L. Udpa, and Y. Deng, "A Kriging-based magnetic flux leakage method for fast defect detection in massive pipelines," J. Nondestr. Eval. Diagn. Progn. Eng. Syst. 5, 011002 (2022).
39. W. Sun and T. T. Cai, "Oracle and adaptive compound decision rules for false discovery rate control," J. Am. Stat. Assoc. 102, 901–912 (2007).
40. O. Muralidharan, "An empirical Bayes mixture method for effect size and false discovery rate estimation," Ann. Appl. Stat. 4, 422–438 (2010).
41. Y. Benjamini, "Discovering the false discovery rate," J. R. Stat. Soc. B 72, 405–416 (2010).
42. B. Efron, Large-Scale Inference: Empirical Bayes Methods for Estimation, Testing, and Prediction (Cambridge University Press, 2012), Vol. 1.
43. T. T. Cai, W. Sun, and Y. Xia, "LAWS: A locally adaptive weighting and screening approach to spatial multiple testing," J. Am. Stat. Assoc. 116, 1–14 (2021).
44. E. J. Candes, C. A. Sing-Long, and J. D. Trzasko, "Unbiased risk estimates for singular value thresholding and spectral estimators," IEEE Trans. Signal Process. 61, 4643–4657 (2013).
45. J. Josse, S. Sardy, and S. Wager, arXiv:1602.01206 (2016).
46. J. Josse and S. Sardy, "Adaptive shrinkage of singular values," Stat. Comput. 26, 715–724 (2016).