Deep learning-based fringe projection profilometry (FPP) shows potential for challenging three-dimensional (3D) reconstruction of objects with dynamic motion, complex surfaces, and extreme environments. However, previous deep learning-based methods are all supervised: they require a large number of training datasets and are difficult to apply to scenes that differ from the training data. In this paper, we propose a new geometric constraint-based phase unwrapping (GCPU) method that enables an untrained deep learning-based FPP for the first time. An untrained convolutional neural network is designed to achieve correct phase unwrapping through an optimization of the network parameter space. The loss function of the optimization is constructed by following the 3D, structural, and phase consistency. The designed untrained network directly outputs the desired fringe order from the inputted phase and fringe background. The experiments verify that the proposed GCPU method provides higher robustness than traditional GCPU methods, thus resulting in accurate 3D reconstruction for objects with a complex surface. Unlike the commonly used temporal phase unwrapping, the proposed GCPU method does not require additional fringe patterns and can therefore also be used for dynamic 3D measurement.
I. INTRODUCTION
Fringe projection profilometry (FPP) has been widely used in high-precision three-dimensional (3D) measurements.1–3 FPP usually requires at least three phase-shifted fringe patterns to calculate the desired phase by using the phase-shifting algorithm.4 The calculated phase is discontinuous and wrapped in the range of (−π, π].5–7 Temporal phase unwrapping (TPU) is often preferred, which unwraps the phase by using gray-code,8 multi-frequency,9 or phase-code patterns.10 These additional patterns obviously reduce the image acquisition speed and thus the 3D measurement speed.11 The best solution is to unwrap the phase without using any additional patterns, e.g., spatial phase unwrapping (SPU)12 and geometric constraint-based phase unwrapping (GCPU).13 SPU unwraps the phase following a path guided by a parameter map14 and often fails for a complex surface because of local error propagation.15 GCPU unwraps the phase with the assistance of geometric constraints provided by an additional camera, e.g., the wrapped phase, the epipolar geometry, the measurement volume, and the phase monotonicity.16
In GCPU, two cameras are selected to construct a stereo vision system, which determines the fringe order based on the image correspondence established between them.13 For each pixel in the first camera, the 3D data can be calculated from the absolute phase retrieved from the candidate fringe orders and then transformed into the second camera to construct the coarse image correspondence. The unique fringe order can be determined by refining the coarse correspondence based on the feature similarity.17,18 Both low system calibration accuracy and unsatisfactory features will result in wrong fringe orders.16,19 The measurement volume constraint can be used to improve GCPU, but it requires selecting complex empirical parameters20 and restricts the measurement system,21 especially for dense fringe patterns.22 Deep learning has recently been introduced to GCPU for flexible phase unwrapping because it is data-driven and can extract high-level features, which allows it to directly map the captured fringe patterns to the desired fringe order without complex parameter selection or system restriction.23
Traditional deep learning-based GCPU methods construct a supervised neural network for the nonlinear mapping between the inputted patterns and the ground truth.23 Supervised deep learning requires a large number of samples and ground truths for the network training, which is very time-consuming and especially difficult for some special cases,24,25 e.g., live rabbit hearts26 and robotic flapping wings.27 In addition, the trained neural network is difficult to apply to scenes that differ from the training scene.28–31 In contrast, untrained deep learning optimizes the network parameter space by minimizing a loss function constructed using the Huygens–Fresnel principle,32 the physical process of computational ghost imaging,33 or the first Born and multi-slice scattering models.34 Because the selected physical model conforms to the physical mechanism, the optimized network parameter space can adapt to different scenes, since it does not rely on a trained model as supervised deep learning does.35,36 Thus, untrained deep learning is becoming increasingly important and useful for phase imaging,32 quantitative phase microscopy,37 snapshot compressive imaging,36 compressive lensless photography,35 computational ghost imaging,33 and diffraction tomography.34
In this paper, we propose a new untrained deep learning-based GCPU (UGCPU) that can transform the calculated phase and fringe background into the desired fringe order. The proposed UGCPU can achieve reliable phase unwrapping for FPP under different scenes, which enables an untrained deep learning-based FPP for the first time. The provided experiments verify that UGCPU outperforms the previous GCPU methods with higher robustness and works well even for objects with complex surfaces and dynamic motion.
II. PRINCIPLE OF GCPU
In FPP, a set of phase-shifted sinusoidal fringe patterns is first projected by the projector and then captured by the camera.38,39 The intensity of these captured fringe patterns can be expressed as

$$I_n(x, y) = a(x, y) + b(x, y)\cos\left[\varphi(x, y) + \delta_n\right], \quad n = 1, 2, \ldots, N, \tag{1}$$

where (x, y) denotes the image coordinate; N denotes the number of phase steps; a, b, and φ are the fringe background, amplitude, and phase, respectively; and δn is the phase shift amount. The phase shifts are designed as

$$\delta_n = \frac{2\pi (n - 1)}{N}. \tag{2}$$

In practice, the fringe amplitude is often used to remove invalid pixels around shadows and discontinuities,15 which can be calculated as40

$$b(x, y) = \frac{2}{N}\sqrt{\left[\sum_{n=1}^{N} I_n(x, y)\sin\delta_n\right]^2 + \left[\sum_{n=1}^{N} I_n(x, y)\cos\delta_n\right]^2}. \tag{3}$$

The desired phase can be calculated by using a least-squares algorithm41 as follows:

$$\varphi(x, y) = -\arctan\frac{\sum_{n=1}^{N} I_n(x, y)\sin\delta_n}{\sum_{n=1}^{N} I_n(x, y)\cos\delta_n}. \tag{4}$$

Because the calculated phase is wrapped in the range of (−π, π], a phase unwrapping process is required to determine the fringe order K for each fringe period. The absolute phase can be obtained as42

$$\Phi(x, y) = \varphi(x, y) + 2\pi K(x, y). \tag{5}$$
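As a quick reference, the following is a minimal NumPy sketch (not the authors' code) of Eqs. (2)–(5), assuming the N captured patterns are stacked in an array `I` of shape (N, H, W) and the fringe order map `K` is known:

```python
import numpy as np

def phase_from_patterns(I):
    """Fringe amplitude and wrapped phase from N phase-shifted patterns, Eqs. (2)-(4)."""
    N = I.shape[0]
    delta = 2 * np.pi * np.arange(N) / N           # phase shifts of Eq. (2)
    s = np.tensordot(np.sin(delta), I, axes=1)     # sum_n I_n * sin(delta_n)
    c = np.tensordot(np.cos(delta), I, axes=1)     # sum_n I_n * cos(delta_n)
    b = 2.0 / N * np.sqrt(s**2 + c**2)             # fringe amplitude, Eq. (3)
    phi = -np.arctan2(s, c)                        # wrapped phase in (-pi, pi], Eq. (4)
    return b, phi

def absolute_phase(phi, K):
    """Absolute phase from the wrapped phase and fringe order, Eq. (5)."""
    return phi + 2 * np.pi * K
```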
For simplicity, we omit the notation (x, y) hereafter.
Unlike the temporal phase unwrapping,43 GCPU obtains the absolute phase by adding another camera to replace the additional fringe patterns, i.e., the GCPU-based FPP system requires one projector and two cameras. In particular, GCPU first constructs the coarse image correspondence by using the candidate fringe orders21 and then determines the unique fringe order with the assistance of the feature similarity, e.g., the intensity and phase distribution.44
The typical GCPU-based FPP system is illustrated in Fig. 1, which includes two cameras c1 and c2 and a projector p. In the world coordinate system,45 the point w is selected from the surface of the object, which reflects the projected light onto the image point o^c1 of c1. The wrapped phase of o^c1 can be calculated from the captured patterns using Eq. (4). There are f candidate fringe orders for o^c1 due to the fringe frequency f. By reusing Eq. (5) f times, f absolute phases are calculated, so o^c1 corresponds to f projector coordinates,46 i.e., o^p_1, o^p_2, …, o^p_f. When combining o^c1 with one projector coordinate o^p_i, the 3D coordinate of c1 is calculated by combining the system parameters of the camera c1 and projector p, which can be transformed into the camera coordinate of c2 by using the system parameters of the camera c2 and projector p.20,47 By repeating the above process f times, each pixel of c1 corresponds to f pixels of c2, i.e., o^c2_1, o^c2_2, …, o^c2_f. Then, by comparing the wrapped phase or grayscale difference between o^c1 and each o^c2_i, the most similar pixel is determined and the unique fringe order corresponding to it can be selected.
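The candidate-selection step described above can be summarized by the following Python sketch; `reconstruct_3d` and `project_to_cam2` are hypothetical stand-ins for the calibration-based triangulation (c1 pixel plus projector coordinate) and the re-projection into c2, and the similarity used here is the wrapped-phase difference only:

```python
import numpy as np

def select_fringe_order(phi1, uv1, phi2_img, candidates,
                        reconstruct_3d, project_to_cam2):
    """Pick the fringe order whose induced c2 pixel best matches the c1 wrapped phase.

    phi1: wrapped phase at the c1 pixel uv1; phi2_img: wrapped phase map of c2;
    candidates: iterable of candidate fringe orders; reconstruct_3d / project_to_cam2
    are hypothetical calibration-based helpers."""
    best_k, best_err = None, np.inf
    for k in candidates:
        Phi = phi1 + 2 * np.pi * k                     # candidate absolute phase, Eq. (5)
        xyz = reconstruct_3d(uv1, Phi)                 # c1 pixel + projector -> 3D point
        u2, v2 = project_to_cam2(xyz)                  # 3D point -> c2 pixel
        if not (0 <= v2 < phi2_img.shape[0] and 0 <= u2 < phi2_img.shape[1]):
            continue                                   # candidate falls outside c2 image
        d = np.abs(phi1 - phi2_img[int(v2), int(u2)])  # wrapped-phase similarity
        err = min(d, 2 * np.pi - d)
        if err < best_err:
            best_k, best_err = k, err
    return best_k
```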
III. PRINCIPLE OF UNTRAINED DEEP LEARNING-BASED GCPU
Unlike the supervised deep learning, the untrained deep learning can be applied for different scenes without advance training.37 By constructing the loss function based on the general physical model that is independent of the scene,35 the network parameters (i.e., weights, bias, and convolutional kernels) can be optimized for each input through iterated learning.48 An untrained deep learning-based GCPU method is proposed, which obtains the desired fringe order by constructing an untrained convolutional neural network (UCNNet). The constructed UCNNet directly outputs the fringe order from the calculated phase and fringe background.
The flowchart of UGCPU is provided in Fig. 2. For simplicity, the fringe order K, calculated phase φ, and fringe background a with the superscript c1 or c2 refer to the first or second camera, respectively. Thus, φ^c1, a^c1, φ^c2, and a^c2 are selected as the input. The two sets of data provided by c1 and c2 are inputted into UCNNet through two weight-sharing pipelines, respectively. The two pipelines have the same network parameter space.49 UCNNet directly outputs the fringe orders K^c1 and K^c2, respectively. The parameter space of UCNNet is initialized and iteratively optimized to obtain accurate K^c1 and K^c2 by minimizing a loss function constructed by following the 3D, structural, and phase consistency, which can be calculated using the inputted data and outputted fringe orders and denoted as Loss1, Loss2, and Loss3, respectively. Here, the optimization process is defined as θ̂ = arg min_θ Loss, where Loss represents the loss function and θ denotes the network parameter space. For simplicity, we omit the notation hereafter. After the network space optimization, a median filter is introduced to remove impulse noise caused by the discrete sampling and the error of UCNNet,50 and the desired absolute phases Φ^c1 and Φ^c2 can be retrieved from K^c1 and K^c2, respectively.
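The weight-sharing arrangement of the two pipelines can be illustrated with a short PyTorch sketch; the single convolution below is only a placeholder for the U-shaped UCNNet of Sec. III A, and the random tensors stand in for the real inputs:

```python
import torch
import torch.nn as nn

# Stand-in for the U-shaped UCNNet (the real network is described in Sec. III A).
ucnnet = nn.Conv2d(2, 1, kernel_size=3, padding=1)

# Dummy inputs: wrapped phase and fringe background for cameras c1 and c2.
H, W = 480, 640
phi1, a1 = torch.rand(H, W), torch.rand(H, W)
phi2, a2 = torch.rand(H, W), torch.rand(H, W)

x1 = torch.stack([phi1, a1]).unsqueeze(0)   # 1 x 2 x H x W input of c1
x2 = torch.stack([phi2, a2]).unsqueeze(0)   # 1 x 2 x H x W input of c2
K1 = ucnnet(x1)                             # fringe-order map for c1
K2 = ucnnet(x2)                             # same module, i.e., shared parameter space
```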
A. Network structure of UCNNet
Inspired by the efficient residual factorized network,51,52 a U-shaped structure is selected to construct UCNNet. One pipeline of the constructed UCNNet is illustrated in Fig. 3. The resolution of the inputted patterns is selected as W × H. Both down-sampling and up-sampling of the input are used to increase the network efficiency and data mining ability.53 The network structure includes convolution, downsampler blocks, non-bottleneck-1D blocks, upsampler blocks, and transposed convolution.52,53 In addition, an activation module is added to restrict the range of the outputted fringe orders by linearly rescaling its input.
Taking the first camera as an example, the depth of field of the GCPU-based FPP system can be pre-determined,21 and a finite range of candidate fringe orders can be selected for each pixel.20 The activation module then linearly maps its input into this range. The activation module of c2 works in the same way.
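One possible realization of the activation module (an assumption rather than the authors' exact layer) is a per-sample min–max normalization followed by a linear rescaling into the candidate range, here denoted k_min to k_max:

```python
import torch
import torch.nn as nn

class FringeOrderActivation(nn.Module):
    """Linearly rescales its input into the pre-determined candidate fringe-order range."""
    def __init__(self, k_min: float, k_max: float):
        super().__init__()
        self.k_min, self.k_max = k_min, k_max

    def forward(self, x):
        # Min-max normalize to [0, 1], then map linearly onto [k_min, k_max].
        x = (x - x.min()) / (x.max() - x.min() + 1e-8)
        return self.k_min + (self.k_max - self.k_min) * x

# Example: roughly 10 candidate fringe orders for a 30-pixel fringe period (Sec. IV).
act = FringeOrderActivation(0.0, 9.0)
K = act(torch.randn(1, 1, 480, 640))
```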
B. Loss function of UCNNet
As aforementioned, the loss function of UCNNet is constructed by following the 3D, structural, and phase consistency. Thus, the loss function consists of three types of losses, i.e., 3D, structural, and phase. The structure chart of the loss function is provided in Fig. 4, and the three types of losses are illustrated below.
The 3D consistency requires that the 3D data reconstructed by one camera and the projector be the same as the 3D data reconstructed by the two cameras. As shown in Fig. 1, the 3D coordinate of o^c1 can be obtained from φ^c1 and K^c1 when combined with the corresponding projector pixel o^p. In each epoch of the iterative optimization, this 3D coordinate is updated and transformed into the coordinate o^c2 in the perspective of c2 by using the principle of GCPU illustrated in Sec. II. In the same coordinate system of c1, the 3D coordinate of o^c1 can also be calculated by combining o^c1 with the corresponding pixel of c2. The corresponding pixel is denoted as õ^c2, which is also updated in each epoch and determined by searching the coordinate with the same absolute phase on the epipolar line by using the phase matching method.54–58 When the above two sets of 3D data are the same, the coordinates o^c2 and õ^c2 should be consistent. Thus, taking the horizontal axis of o^c2 and õ^c2, the 3D consistency-based loss function can be described as

$$\mathrm{Loss}_1 = \frac{1}{M}\sum_{m=1}^{M}\left|u\left(o^{c2}_m\right) - u\left(\tilde{o}^{c2}_m\right)\right|,$$

where M denotes the number of pixels; the subscript m represents the mth pixel; the values of o^c2 and õ^c2 corresponding to the mth pixel of c1 are o^c2_m and õ^c2_m, respectively; and u(·) gives the horizontal axis of one pixel. It should be noted that invalid pixels are neglected according to the fringe amplitude. Because the 3D coordinate calculated by the two cameras is related to the absolute phase of all pixels on the epipolar line, the 3D consistency constrains the outputted fringe order over the entire image.
The structural consistency constrains a^c1 and a^c2 to have the same structure at corresponding pixels, measured by the structural similarity (SSIM).59 The structural consistency-based loss function can be described as

$$\mathrm{Loss}_2 = \frac{1}{M}\sum_{m=1}^{M}\left[1 - \mathrm{SSIM}\left(a^{c1}_m, a^{c2}_m\right)\right],$$

where the value of a^c2 corresponding to the mth pixel of c1 is a^c2_m. Because the value of SSIM is related to the grayscale of all pixels in the local window,59 the structural consistency constrains the outputted fringe order over local areas of the entire image.
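A simplified SSIM-based loss (uniform window instead of the usual Gaussian one, and assuming the fringe backgrounds are N × 1 × H × W tensors normalized to [0, 1], with a^c2 already remapped into the c1 perspective) could be sketched as follows:

```python
import torch
import torch.nn.functional as F

def ssim_loss(a1, a2, win=11, C1=0.01**2, C2=0.03**2):
    """1 - mean SSIM between two single-channel images in [0, 1] (uniform window)."""
    pad = win // 2
    mu1 = F.avg_pool2d(a1, win, 1, pad)
    mu2 = F.avg_pool2d(a2, win, 1, pad)
    var1 = F.avg_pool2d(a1 * a1, win, 1, pad) - mu1 * mu1
    var2 = F.avg_pool2d(a2 * a2, win, 1, pad) - mu2 * mu2
    cov = F.avg_pool2d(a1 * a2, win, 1, pad) - mu1 * mu2
    ssim = ((2 * mu1 * mu2 + C1) * (2 * cov + C2)) / \
           ((mu1 * mu1 + mu2 * mu2 + C1) * (var1 + var2 + C2))
    return 1.0 - ssim.mean()
```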
The phase consistency constrains φ^c1 and φ^c2 to have the same value at corresponding pixels. The phase consistency-based loss function can be described as

$$\mathrm{Loss}_3 = \frac{1}{M}\sum_{m=1}^{M}\min\left(\left|\varphi^{c1}_m - \varphi^{c2}_m\right|,\; 2\pi - \left|\varphi^{c1}_m - \varphi^{c2}_m\right|\right),$$

where the value of φ^c2 corresponding to the mth pixel of c1 is φ^c2_m. The difference between φ^c1_m and φ^c2_m is thereby limited to the range of [0, π] due to the 2π period of the wrapped phase. The phase consistency constrains the outputted fringe order pixel by pixel.
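Under the assumption that the loss takes the wrapped difference written above, a minimal PyTorch version is:

```python
import math
import torch

def phase_consistency_loss(phi1, phi2):
    """Mean wrapped phase difference; the 2*pi period folds it into [0, pi]."""
    d = torch.abs(phi1 - phi2)
    d = torch.minimum(d, 2 * math.pi - d)   # fold the difference into [0, pi]
    return d.mean()
```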
Because the two cameras should have the same loss function, the loss function of UCNNet is defined as

$$\mathrm{Loss} = \sum_{c \in \{c1,\, c2\}}\left(\lambda_1 \mathrm{Loss}_1^{c} + \lambda_2 \mathrm{Loss}_2^{c} + \lambda_3 \mathrm{Loss}_3^{c}\right),$$

where λ1, λ2, and λ3 are the weights of Loss1, Loss2, and Loss3, respectively, and the superscript c indicates that the three losses are evaluated from the perspective of each camera. According to our experimental results, the empirical values of these weights are λ1 = 1, λ2 = 20, and λ3 = 5.
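One plausible reading of this combination (a sketch, not the authors' exact implementation) sums the three weighted terms computed from each camera's perspective:

```python
LAMBDA_1, LAMBDA_2, LAMBDA_3 = 1.0, 20.0, 5.0   # empirical weights reported above

def total_loss(terms_c1, terms_c2):
    """Weighted sum of (Loss1, Loss2, Loss3) evaluated from the c1 and c2 perspectives."""
    loss = 0.0
    for l1, l2, l3 in (terms_c1, terms_c2):
        loss = loss + LAMBDA_1 * l1 + LAMBDA_2 * l2 + LAMBDA_3 * l3
    return loss
```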
It should be noted that the inputted data captured by the two cameras are not exactly the same due to the non-common field of view, and the calibration accuracy also influences the above consistencies. The optimized fringe orders will have a slight deviation, which should be less than 0.5 to avoid wrong phase unwrapping.
IV. EXPERIMENT
The GCPU-based FPP system includes a TI DLP6500 projector with a resolution of 1920 × 1080, two Basler acA640-750μm cameras with a resolution of 640 × 480, and two lenses with a focal length of 8 mm. The measured objects are placed in front of the system at a distance of around 1 m.
To verify the proposed UGCPU, a dataset containing 1025 scenes is constructed, with 960 simple scenes of toys with smooth white surfaces and 65 complex scenes containing objects with complex surfaces. For each scene, six sets of three-step phase-shifted patterns combined with gray code-based patterns are projected and then captured.60 It should be noted that these additional gray code-based patterns are only used to obtain the ideal absolute phase and the resulting ground-truth 3D data for verification. To be comprehensive, different fringe periods, i.e., 25, 30, 35, 40, 50, and 60 pixels, are selected, for which the range of candidate fringe orders contains about 12, 10, 8, 7, 6, and 5 candidates according to the measurement system, respectively. For each period, the number of gray code-based patterns is determined according to the projector resolution.60 Both the traditional GCPU (TGCPU)22 and the supervised deep learning-based GCPU (SGCPU)23 are used for comparison.
UCNNet is implemented by using Python and the PyTorch framework on a PC with an Intel Core i9-7900X central processing unit (CPU) (3.30 GHz), 32 GB of RAM, and an NVIDIA Titan RTX graphics card. The constructed UCNNet uses the adaptive moment estimation (Adam) optimizer61 to optimize the parameter space with a batch size of 1 and a learning rate of 0.001. The experimental results of UGCPU are available as "Dataset" in "Data Availability."
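Putting the pieces together, a minimal sketch of the parameter-space optimization loop with the stated settings could look as follows; `ucnnet`, `x1`, `x2`, and `total_loss` refer to the stand-ins sketched in Sec. III, and `consistency_terms` is a hypothetical helper that remaps the data between the two views and evaluates Loss1–Loss3:

```python
import torch

optimizer = torch.optim.Adam(ucnnet.parameters(), lr=0.001)   # Adam, lr = 0.001

for epoch in range(400):            # roughly 400 epochs from random initialization
    optimizer.zero_grad()
    K1 = ucnnet(x1)                 # fringe-order map for c1 (batch size 1)
    K2 = ucnnet(x2)                 # same weight-sharing module for c2
    # consistency_terms is a hypothetical helper returning (Loss1, Loss2, Loss3).
    loss = total_loss(consistency_terms(K1, K2, view="c1"),
                      consistency_terms(K2, K1, view="c2"))
    loss.backward()
    optimizer.step()
```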
A. Verification of UGCPU
From the constructed dataset, four scenes, namely, a simple white toy, complex hand gesture, surgical mask, and power strip, are selected to verify UGCPU. The fringe period is selected as 30 pixels. The experimental results of the simple toy and other three complex objects are shown in Figs. 5 and 6, respectively.
Experimental results for simple white toys: (a) inputted data, (b) outputted fringe orders, (c) desired absolute phases, and (d) 3D data.
Experimental results for complex objects: (a) hand gesture, (b) surgical mask, and (c) power strip.
Figure 5(a) shows the inputted φ^c1, a^c1, φ^c2, and a^c2. After the iterative optimization, UCNNet directly outputs K^c1 and K^c2, as shown in Fig. 5(b). By reusing Eq. (5) twice, the two absolute phases Φ^c1 and Φ^c2 are retrieved from K^c1 and K^c2, respectively. The retrieved absolute phases shown in Fig. 5(c) are as smooth and correct as the ideal absolute phase obtained by the robust gray code-based method. When combined with the system parameters, the desired 3D shapes of c1 and c2 can be reconstructed from Φ^c1 and Φ^c2, respectively. The 3D shape of c1 is selected and shown in Fig. 5(d). The resulting 3D shape is smooth and contains no spikes, wrinkles, or serious height jumps caused by unwrapping errors.50 Thus, UGCPU achieves correct phase unwrapping. Figures 6(a)–6(c) illustrate the results for the hand gesture, surgical mask, and power strip, respectively. The left, middle, and right columns show the inputted data, the outputted fringe orders, and the reconstructed 3D shapes, respectively. The proposed UGCPU also achieves correct phase unwrapping and results in accurate 3D reconstruction even for complex scenes.
UGCPU is also tested in dynamic scenes with a fringe period of 60 pixels. A moving hand and two moving white toys are measured and shown in Figs. 7(a) and 7(b), respectively. The first and second rows show the UCNNet inputs a^c1 and a^c2 and the 3D results of UGCPU, respectively. For the moving hand and toys, 212 and 86 scenes are successively captured and reconstructed, as shown in Multimedia views 1 and 2, respectively. During the movement of the objects, UGCPU achieves correct and robust phase unwrapping and thus reconstructs accurate 3D shapes for dynamic scenes.
Experimental results for the dynamic measurement: (a) hand gestures and (b) white toys. Multimedia views: (a) https://doi.org/10.1063/5.0069386.1 and (b) https://doi.org/10.1063/5.0069386.2
We emphasize that UGCPU takes more computational time than the previous GCPUs because the random initialization of the network parameter space requires more optimization epochs. A fine-tuning strategy is therefore selected to accelerate the computation by replacing the random initialization with a pre-trained model initialization.62 The pre-trained model is the network parameter space of UCNNet pre-optimized using other scenes, which provides a more reasonable initialization for new scenes. For 20 white toy scenes with a fringe period of 60 pixels, the loss curves with random initialization and pre-trained model initialization are plotted against the epoch in Figs. 8(a) and 8(b), respectively. The former requires 57 min with 400 epochs, and the latter reduces the computational time to 43 s with 5 epochs by using 80 white toy scenes to obtain the pre-trained model. Each scene then takes only around 2 s for the 3D reconstruction, which is acceptable for practical measurement.
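The fine-tuning strategy amounts to initializing UCNNet from a previously optimized parameter space instead of random weights; a minimal sketch, with a hypothetical checkpoint file name, is:

```python
import torch

# Hypothetical checkpoint produced by pre-optimizing UCNNet on other scenes.
ucnnet.load_state_dict(torch.load("ucnnet_pretrained.pth"))

optimizer = torch.optim.Adam(ucnnet.parameters(), lr=0.001)
for epoch in range(5):              # 5 epochs sufficed in the reported experiment
    ...                             # same optimization steps as before
```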
Loss curve of UGCPU: (a) optimization with random initialization and (b) optimization with pre-trained model initialization.
B. Reliability comparison between the proposed and previous GCPUs
The proposed untrained deep learning-based, supervised deep learning-based, and traditional GCPUs are compared on 800 simple scenes of white toys selected from the constructed dataset. To be specific, 500, 150, and 150 scenes are selected as the training set, validation set, and testing set of SGCPU, respectively. The testing scenes are also used to optimize UCNNet and to test TGCPU. For each scene, the retrieved absolute phase is subtracted from the ideal absolute phase obtained by the gray code-based method. An incorrect unwrapping rate (IUR) is defined as the ratio of the number of pixels containing unwrapping errors to the whole size of the valid area.15 For clarity, the IUR is divided into three ranges of (0%, 0.1%), (0.1%, 1%), and (1%, 100%). The number of scenes in each range is calculated and provided in Table I.
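For reference, the IUR of one scene can be computed from the retrieved and ideal absolute phases in a few lines (a sketch, assuming the valid-area mask comes from thresholding the fringe amplitude):

```python
import numpy as np

def incorrect_unwrapping_rate(Phi, Phi_ideal, valid_mask):
    """Fraction of valid pixels whose fringe order differs from the ideal one."""
    order_error = np.round((Phi - Phi_ideal) / (2 * np.pi))   # integer fringe-order error
    wrong = (order_error != 0) & valid_mask
    return wrong.sum() / valid_mask.sum()
```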
TABLE I. Incorrect unwrapping rate of UGCPU, SGCPU, and TGCPU: number of scenes (out of 150 testing scenes) in each IUR range for each fringe period (pixels).

| Method | IUR | 60 | 50 | 40 | 35 | 30 | 25 |
|---|---|---|---|---|---|---|---|
| UGCPU | (0%, 0.1%) | 74 | 64 | 29 | 22 | 9 | 0 |
| UGCPU | (0.1%, 1%) | 76 | 86 | 112 | 117 | 109 | 93 |
| UGCPU | (1%, 100%) | 0 | 0 | 9 | 11 | 32 | 57 |
| SGCPU | (0%, 0.1%) | 75 | 58 | 46 | 42 | 23 | 1 |
| SGCPU | (0.1%, 1%) | 38 | 41 | 39 | 30 | 48 | 46 |
| SGCPU | (1%, 100%) | 37 | 51 | 65 | 78 | 79 | 103 |
| TGCPU | (0%, 0.1%) | 0 | 0 | 0 | 0 | 0 | 0 |
| TGCPU | (0.1%, 1%) | 22 | 18 | 9 | 7 | 1 | 0 |
| TGCPU | (1%, 100%) | 128 | 132 | 141 | 143 | 149 | 150 |
When a scene is in the range of (0%, 0.1%), the number of incorrect pixels is around 100, which can be neglected in practice considering the whole size of the captured patterns.11 When a scene is in the range of (0.1%, 1%), the number of incorrect pixels is acceptable because these pixels are discretely distributed and their unwrapping errors can be removed through a filtering operation.63 However, for a scene with an IUR beyond 1%, the number of incorrect pixels is non-negligible, and it is difficult to remove these unwrapping errors because the incorrect pixels are often connected or close to each other. For clarity, the number of scenes with an IUR beyond 1% is plotted in Fig. 9.
Over the 150 testing scenes, the IUR of UGCPU is less than 1% when a relatively large fringe period is selected, e.g., 60 or 50 pixels. UGCPU with a large fringe period can thus achieve robust and correct phase unwrapping under different scenes. In contrast, both SGCPU and TGCPU generate non-negligible unwrapping errors even when a large fringe period is selected; e.g., 37 and 128 of their scenes with a fringe period of 60 pixels have an IUR larger than 1%, respectively. From Table I and Fig. 9, the number of wrong scenes increases as the fringe period is reduced, because the denser fringes and larger number of candidate fringe orders increase the difficulty of parameter optimization, network learning, and feature similarity comparison for UGCPU, SGCPU, and TGCPU, respectively. However, UGCPU still clearly performs better than TGCPU and SGCPU, with fewer wrong scenes for each fringe period.
For clarity, three scenes with a fringe period of 30 pixels are reconstructed and shown in Figs. 10(a)–10(c), respectively. The first and second rows of Fig. 10 show the inputted data and the 3D data calculated from the ground truth, UGCPU, SGCPU, and TGCPU, respectively. As shown in Fig. 10(a), UGCPU and SGCPU obtain the same smooth 3D data as the ground truth. TGCPU contains small and scattered error pixels due to the fixed weights of the feature similarities and the relatively large number of candidate fringe orders. As shown in Figs. 10(b) and 10(c), similar results are obtained for TGCPU. SGCPU generates a large number of error pixels because the trained model is not suitable for the testing scenes. For UGCPU, the areas containing error pixels are marked by white boxes. UGCPU may fail in some small areas located in the non-common field of view or in regions with a large perspective difference between the two cameras. It should be noted that, for some scenes with a relatively large number of error pixels, UGCPU can eliminate these scenes by checking the 3D loss value; for example, the 3D loss value is larger than 3 for such scenes with a fringe period of 30 pixels.
3D shape reconstructed by selected GCPUs: (a)–(c) results of three different scenes of simple white toys.
C. Comparison of deep learning-based GCPUs under different scenes
First, the performance of the proposed UGCPU and SGCPU is compared by selecting 160 simple scenes of white toys and 65 complex scenes with a fringe period of 60 pixels from the constructed dataset. SGCPU is trained on 120 simple scenes of white toys, validated on 40 simple scenes of white toys, and tested on the 65 complex scenes. The desired results can be obtained on the validation scenes. The proposed UGCPU does not require advance training and only uses the 65 testing scenes to optimize the parameter space. Three complex scenes of the hand gesture, surgical mask, and power strip are reconstructed and shown in Figs. 11(a)–11(c), respectively. The first, second, third, and fourth columns of Fig. 11 show the captured scene represented by a^c1 and a^c2 and the 3D data of the ground truth, UGCPU, and SGCPU, respectively. Compared with the smooth ground-truth 3D data, the 3D shapes of SGCPU contain a large number of height jumps. These height jumps are related to unwrapping errors with wrong fringe orders, and the IUR of almost all scenes is obviously larger than 1%. UGCPU results in the same smooth and correct 3D shapes as the ground truth.
Comparison between the proposed UGCPU and SGCPU for different scenes: (a) hand, (b) surgical mask, and (c) power strip.
Second, we compare UGCPU and SGCPU under limited scenes. From the constructed dataset, 160 simple white toy scenes with a fringe period of 60 pixels are measured. In detail, 20, 20, and 120 scenes are selected to train, validate, and test SGCPU, respectively. UGCPU only selects 20 scenes from the testing scenes to optimize the parameter space. Three different scenes are reconstructed and shown in Figs. 12(a)–12(c), respectively. When such a small number of training scenes is used, the IUR of three-quarters of the scenes is larger than 1% for SGCPU, and serious height jumps are caused by wrong fringe orders. UGCPU still performs robustly and obtains smooth 3D data because it does not rely on advance training.
Comparison between the proposed UGCPU and SGCPU under limited scenes: (a)–(c) results of three different scenes of simple white toys.
V. CONCLUSIONS
In this paper, we introduce untrained deep learning to the commonly used FPP by proposing a new untrained geometric constraint-based phase unwrapping method. The proposed UGCPU is more robust than TGCPU and SGCPU, especially when a relatively large fringe period is selected. Although UGCPU requires a relatively complex optimization process, its computational time can be reduced to an acceptable level for practical applications. To the best of our knowledge, this is the first untrained deep learning-based work for FPP. UGCPU does not require additional fringe patterns for the phase unwrapping and can therefore be used for dynamic and high-speed 3D measurement. Most importantly, UGCPU does not require advance training and can thus be applied to various scenes. In the future, we will further introduce untrained deep learning to the phase calculation of FPP.
ACKNOWLEDGMENTS
This work was sponsored by the National Natural Science Foundation of China (Grant Nos. 61727802 and 62031018) and the Jiangsu Provincial Key Research and Development Program (Grant No. BE2018126).
AUTHOR DECLARATIONS
Conflict of Interest
The authors have no conflicts to disclose.
DATA AVAILABILITY
The data that support the findings of this study are openly available in figshare at https://figshare.com/articles/dataset/Dataset_zip/16438272.64