This work proposes a deep learning (DL)-based framework, namely Sim2Real, for spectral signal reconstruction in reconstructive spectroscopy, focusing on efficient data sampling and fast inference time. It addresses the challenge of reconstructing real-world spectral signals in an extreme setting where only device-informed simulated data are available for training. Such simulated data are much easier to collect than real-world data but exhibit large distribution shifts from their real-world counterparts. To leverage the simulated data effectively, a hierarchical data augmentation strategy is introduced to mitigate the adverse effects of this domain shift, and a corresponding neural network for spectral signal reconstruction with the augmented data is designed. Experiments using a real dataset measured from our spectrometer device demonstrate that Sim2Real achieves significant speed-up during inference while attaining on-par performance with state-of-the-art optimization-based methods.
I. INTRODUCTION
Optical spectroscopy is a versatile technique for various scientific, industrial, and consumer applications. Recently, computational spectroscopy using reconstructive algorithms1–8 has been rapidly developed owing to its potential to enable a miniaturized spectrometer.9,10 A spectrometer encodes optical spectral information spatially or temporally and then measures the encoded information using a series of photodetectors. The transformation between the input signal x and the photo-detector readout y in a spectrometer design can be either linear or nonlinear, although a linear design is typically preferred. An example of nonlinear encoding is observed in Fourier-transform infrared (FTIR) spectrometers, which produce an auto-correlation of the input through a variable time-delay interferometer. For a linear system, x and y can be represented using column vectors, while the mapping is represented by a responsivity matrix R such that y = Rx. For example, a monochromator spatially disperses different spectral components with a block-diagonal R matrix. In a reconstructive spectrometer, R is generally much more complex: each photo-detector’s readout depends on all, rather than just a few, spectral components. While the complex structure of R means that substantial computational resources are generally required to recover the spectrum x from the photo-detector readouts y, it also opens up new opportunities. These include the ability to tailor R to encode the spectral information directly to parameters of interest in the application, such as the spectral peak positions and the relative intensities between the peaks; the ability to increase the accuracy of spectral reconstruction by focusing on the most important spectral information even when y’s dimension is much smaller than x’s;11 and the ability to miniaturize the spectrometer.
A wealth of research has been conducted to increase the efficiency and accuracy of spectral reconstruction. The approaches can generally be categorized as optimization-based or data-driven. The optimization-based approaches formulate the inverse problem as a convex optimization problem over the signal to be reconstructed. To deal with the non-uniqueness of the solution, different types of regularization (e.g., nonnegativity, sparsity, and smoothness) have been considered.1,12,13 The effectiveness of these optimization-based techniques often hinges on the precise adjustment of regularization parameters and hyper-parameters, a task that can be challenging.14 Moreover, iterative optimization-based approaches can be computationally and energy expensive for critical applications such as Internet-of-Things or time-sensitive tasks. Despite the challenges, miniaturized spectrometers based on these spectral-reconstruction techniques have reported good performance. For example, we recently developed a chip-scale spectrometer concept by integrating 16 semiconductor photo-detectors, each with a different spectral response.13 We designed the photo-detectors using nanostructured materials to minimize the dependence on the incident angle of light and, therefore, eliminated the need for any external optical components. The entire spectrometer, sans the electronic readout circuitry, is only a few micrometers thick. As the photodetector’s response was broadband, the device was able to recover basic spectral information, including the positions and relative intensities of peaks across the entire visible spectrum. When reconstructing multi-peak spectra, the RMS error in peak localization was 0.97%, comparable to a monochromator with twice the number of detectors. Enabled by the rich interaction between different spectral components in each photo-detector, we also showed that only seven detectors were sufficient for input signals with up to three peaks and relatively smooth spectra.
In comparison, deep learning (DL)-based data-driven approaches8,15,16 showed the potential to address the challenges of an optimization-based approach. First, deep overparametrized networks trained by gradient descent methods tend to enforce the regularization implicitly,17 avoiding the need to adjust regularization parameters. Second, while training neural networks can be computationally expensive, reconstructing a new measurement during inference requires only a single forward pass through the network. In contrast, optimization-based approaches can be costly in terms of energy and computing resources during inference. Various data-driven methods for reconstructive spectroscopy have been developed, including a simple fully connected network applied to an on-device plasmonic spectral encoder,8 noise suppression prior to reconstruction on a colloidal quantum dot spectrometer,16 and a specifically tailored residual convolutional neural network aimed at improving reconstruction performance.15
A significant challenge with current deep learning methodologies is their dependence on extensive volumes of real, precisely annotated training data to achieve optimal performance.8,14 In a practical setting, gathering such real, labeled data for spectrometer reconstruction is both costly and time-intensive. Additionally, the labeled data pairs collected often contain considerable noise, which can degrade test performance when used to train deep learning models.
In this work, we introduce a method to tackle these challenges in deep learning-based approaches for spectral reconstruction. As illustrated in Fig. 1, we propose a Sim2Real framework in which we train the deep learning models solely based on simulated datasets and then deploy the model on a real dataset. Our method contains the following two key components:
Hierarchical Data Augmentation (HDA): To mitigate the domain gap between simulated and real data, we introduce a data augmentation technique to perturb both the response matrix R and the encoded signal. Training the model with these data improves its robustness.
ReSpecNN: In line with the HDA, we developed a new lightweight network architecture specifically designed for the Sim2Real framework, which is applied to the spectral reconstruction problem with the aforementioned HDA approach.
Unlike conventional methods, marked by orange arrows in Fig. 1, which require collecting and training on real-world data, our strategy eliminates the need for extensive real-world data collection, only requiring a single measurement of the response matrix R. Moreover, by utilizing our augmented simulated training data, we effectively close the domain gap between simulated and real data, leading to high-quality spectral reconstruction during testing on real data.
The rest of the paper is organized as follows. In Sec. II, we introduce the mathematical formulation of the spectral reconstruction problem while identifying the limitations of an existing method. In Sec. III, we present our main contributions and discuss how they address the limitations of the existing method. In Sec. IV, we validate the performance of our proposed method on real-world data by comparing it with the state-of-the-art method and discussing the implications.
II. PROBLEM FORMULATION AND EXISTING METHODS
In this section, we introduce the mathematical formulations of the spectrum reconstruction problem. We also identify the challenges and limitations of existing optimization and deep learning methods.
A. Spectral reconstruction problem
As introduced above, a linear spectrometer design maps the input spectrum to the detector readouts through the response matrix. Let x ∈ ℝ^ℓ denote the spectral signal sampled at ℓ wavelengths and y ∈ ℝ^K the readouts of the K photo-detector encoders, so that

y = Rx, (3)

where R ∈ ℝ^{K×ℓ} is the response matrix measured in advance. As such, our goal is to recover x from the observed y and R under the setting where the number of encoders K is much smaller than the number of sampled wavelengths ℓ. In other words, the system in Eq. (3) is highly under-determined with non-unique solutions.
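To make the dimensions concrete, the following minimal NumPy sketch sets K = 16, matching our device, and ℓ = 205, an illustrative value consistent with the peak-location range used in Sec. II C 1:

```python
import numpy as np

rng = np.random.default_rng(0)
K, ell = 16, 205              # number of encoders vs. sampled wavelengths, K << ell
R = rng.random((K, ell))      # stand-in for the measured response matrix
x = rng.random(ell)           # unknown ground-truth spectrum
y = R @ x                     # only K = 16 readouts constrain ell = 205 unknowns

# The null space of R leaves ell - rank(R) degrees of freedom unconstrained,
# so infinitely many spectra are consistent with the same readout y.
print(ell - np.linalg.matrix_rank(R))   # prints 189
```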
B. Optimization-based approaches and limitations
Note that given the response matrix R, the reconstruction problem in Eq. (3) can be approached using (non-negative) least squares, as discussed in the Introduction. Due to the ill-posedness of the problem, explicit regularization techniques are used to ensure unique solutions with desirable additional structure. Among optimization-based methods, the least-squares (LS) estimate fails to account for the signal’s nonnegativity, often leading to suboptimal solutions; this issue can be addressed by employing non-negative least squares (NNLS).1 To deal with the under-determined system, regularization strategies such as ℓ1-norm, ℓ2-norm, or total-variation (TV) norm regularization have also proven effective.12,13
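For concreteness, the sketch below poses one common form of such a regularized reconstruction, NNLS with a TV-style penalty, using the generic convex modeling library cvxpy; the exact objective, difference operator, and regularization weight of the NNLS-TV solver13 may differ from this illustration.

```python
import numpy as np
import cvxpy as cp

def nnls_tv(R, y, lam=1e-3):
    """Non-negative least squares with a total-variation penalty (sketch)."""
    ell = R.shape[1]
    x = cp.Variable(ell, nonneg=True)                # nonnegativity constraint
    D = np.eye(ell, k=1)[:-1] - np.eye(ell)[:-1]     # first-difference operator
    obj = cp.Minimize(cp.sum_squares(R @ x - y) + lam * cp.norm1(D @ x))
    cp.Problem(obj).solve()                          # iterative convex solver
    return x.value
```

Note that each call invokes an iterative solver, which is precisely the inference cost that motivates our learning-based alternative.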
However, solving these optimization problems can be time- and memory-consuming, and their performance is sensitive to the choice of the regularization hyperparameter, so they are often too expensive to deploy on chip-scale devices.
C. Deep learning methods and challenges
Recently, deep learning-based approaches have gained popularity due to their modeling flexibility and fast inference speed. However, a major bottleneck for effectively leveraging deep neural network models is gathering a large amount of real spectral data pairs (x, y), a task that is both expensive and time-consuming in reconstructive spectroscopy.
Therefore, based on the problem formulation in Eq. (3), previous approaches have focused on using the response matrix R to produce simulated spectral data pairs (x̃, ỹ), aiming to mimic the distribution of real data obtained in laboratory settings. However, we observe that the simulated data and the real-measured data exhibit an undesirable phenomenon known as “domain shift,” leading to poor performance of models trained solely on simulated data when applied to real data. To provide a straightforward quantitative assessment of the domain shift in our problem, we ran PCA on both simulated and real data (preprocessed by log-min-max, as mentioned in Sec. IV) to visualize the discrepancy via their first two principal components. As depicted in Fig. 2, the real data present a much larger spread and variation along both axes compared to the simulated data, demonstrating the existence of the domain shift.
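The sketch below outlines how such a PCA comparison can be produced; the per-sample log-min-max preprocessing is our assumption about the scheme mentioned in Sec. IV, and Y_sim and Y_real are placeholder arrays of encoded signals, one K-dimensional readout per row.

```python
import numpy as np
from sklearn.decomposition import PCA

def log_min_max(Y, eps=1e-12):
    """Assumed log-min-max preprocessing: log, then per-sample scaling to [0, 1]."""
    Z = np.log(Y + eps)
    lo, hi = Z.min(axis=1, keepdims=True), Z.max(axis=1, keepdims=True)
    return (Z - lo) / (hi - lo)

def pca_scatter(Y_sim, Y_real):
    """Fit PCA on pooled data and return 2D projections for both domains."""
    pca = PCA(n_components=2)
    pca.fit(np.vstack([log_min_max(Y_sim), log_min_max(Y_real)]))
    return pca.transform(log_min_max(Y_sim)), pca.transform(log_min_max(Y_real))
```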
Later in Sec. III, we delve into this issue and detail our approach to bridging the domain shift. For the reader’s convenience, we first review previous approaches in detail.
1. Training on simulated data
In practice, the simulated spectral signals can be viewed as sums of Lorentzian distribution functions.13,15 Given the response matrix R, a simulated spectral signal x̃ and its corresponding encoded signal ỹ can be generated as follows:
Generate M single-peak Lorentzian distribution functions independently with various widths.
Sum the curves and then normalize the heights within the range [0, 1]. We denote the resulting spectrum as x̃.
Multiply x̃ with the response matrix R to produce the encoded signal ỹ = Rx̃.
Specifically, in the first step, each (Lorentzian) peak is characterized by three parameters: the mean μ, the width constant γ, and the intensity constant I, corresponding to the peak location, spectral width, and intensity magnitude, respectively. Each parameter is sampled i.i.d. uniformly from ranges chosen to match the specific characteristics of the spectrometer device. Under our problem setting, the ranges are set to μ ∈ [0, 205], γ ∈ [15, 20], and I ∈ [0.25, 1], respectively.
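As a concrete illustration, the following NumPy sketch implements this three-step generation for a single spectrum, assuming the standard Lorentzian profile I·γ²/((λ − μ)² + γ²) and a grid of 205 wavelength samples; both are illustrative assumptions rather than the exact device pipeline.

```python
import numpy as np

def simulate_spectrum(num_samples=205, max_peaks=3, rng=None):
    """Simulated spectrum: normalized sum of random single-peak Lorentzians."""
    rng = rng or np.random.default_rng()
    grid = np.arange(num_samples)                 # wavelength grid (index units)
    x = np.zeros(num_samples)
    for _ in range(rng.integers(1, max_peaks + 1)):        # M peaks
        mu = rng.uniform(0.0, 205.0)              # peak location
        gamma = rng.uniform(15.0, 20.0)           # width constant
        I = rng.uniform(0.25, 1.0)                # intensity constant
        x += I * gamma**2 / ((grid - mu) ** 2 + gamma**2)  # Lorentzian peak
    return x / x.max()                            # normalize heights into [0, 1]
```

The corresponding encoded signal is then obtained as `y = R @ simulate_spectrum()`.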
2. Challenges in deploying trained models on real data
Equation (3) is a simplified model of the actual spectroscopy system. As such, the procedure in Sec. II C 1 for generating simulated spectral signals x̃ and their corresponding encoded signals ỹ may produce distributions that differ from the distribution of real encoded signals y. This difference may be attributed to the circuit design, which, in reality, can introduce diverse types of unknown noise into the system. When applying machine learning algorithms, this difference, or “domain shift,” can cause degraded performance.
We illustrate this in Fig. 3, where a model trained exclusively on simulated data successfully reconstructs the spectrum given simulated data as input. However, the model’s performance degrades significantly on real-world data, even when the real-world and simulated data appear visually similar. Detailed empirical evidence is discussed in Sec. IV D.
D. Domain shift between “sim” and “real”
The challenges of “domain shift” are common for deep learning-based approaches. The “domain shift” or “domain gap” issue was first observed in transfer learning,18 where a model trained on a large source dataset is fine-tuned on a smaller, more specific dataset. To mitigate the performance degradation due to distributional discrepancy between source and target datasets, many techniques have been proposed, including domain adaptation,19–21 meta-learning,22 and few-shot learning.23,24 These techniques have been shown to be effective in reducing the domain gap in various fields, including computer vision25,26 and natural language processing.27–29
However, when applied to spectroscopy reconstruction, those methods fall short of bridging the domain gap for the following reasons. First, we do not assume access to any data from the target distribution, whereas existing methods still require a certain amount of target-domain data for fine-tuning, which is not applicable under our assumption. Second, most domain adaptation methods have been designed for classification tasks, making them less directly applicable to our spectral reconstruction problem, which is fundamentally a (non-negative) regression problem. While a few existing results focus on practical regression problems,30,31 they often deal with simple regression tasks rather than the more challenging inverse-problem setting that we consider. Third, regarding deep learning for inverse problems such as image reconstruction, existing research primarily leverages the inductive bias of neural network architectures suited to image signals.32–34 In this work on spectral reconstruction, we deal with one-dimensional spectral signals, which require a different model design.
While deep learning approaches have become commonly used for spectral reconstruction problems, few studies tackle such domain shift issues directly. A recent work8 successfully reconstructed spectra with up to 14 peaks using a model trained on spectra with up to eight peaks under blind testing conditions; however, the training and test data were still drawn from similar conditions and distributions. Therefore, a specialized method is required to tackle the domain shift issue in solving the spectral reconstruction problem. We now introduce the proposed Sim2Real framework in the following.
III. OUR Sim2Real FRAMEWORK
In this section, we introduce our Sim2Real framework to bridge the domain shift between the simulated encoded signal data and the data from real-world spectrometers. Our method tackles the domain shift through two key components: (i) hierarchical data augmentation for training data generation and (ii) a lightweight network architecture designed for the spectrum reconstruction problem with our HDA.
A. Hierarchical data augmentation
Although we measure the response matrix R in advance and consider it fixed and known, our hierarchical data augmentation strategy acknowledges the potential uncertainty in our measurements of R and the encoded signal vector y to improve model robustness and minimize the domain gap. For every simulated training pair (x̃, ỹ) introduced in Sec. II C, we systematically introduce noise as outlined in Algorithm 1.
ALGORITHM 1. Hierarchical data augmentation (HDA).

Input:
    x̃: a simulated spectral signal from Sec. II C
    R: the response matrix
    p(R): the distribution of noise perturbations on R
    p(y): the distribution of noise perturbations on the encoded signal
    S, T: the numbers of perturbations
Output:
    {(x̃, ỹ_{s,t}) : s = 1, …, S; t = 1, …, T}: training data pairs
for s = 1, …, S do
    Sample Δ_s ∼ p(R)
    R_s = R + Δ_s
    for t = 1, …, T do
        Sample ɛ_t ∼ p(y)
        ỹ_{s,t} = R_s x̃ + ɛ_t
    end for
end for
To illustrate this process, we provide a visualization in Fig. 4 using an example of a 3LED sample (number of peaks M = 3), explained in detail below. Instead of simply multiplying by the response matrix R, our hierarchical data augmentation extends each original simulated data sample into S × T augmented ones by adding different noise realizations to the measured response matrix R (aquamarine green traces in the middle) and to each intermediate augmented encoded signal (illustrated for a single perturbed matrix R_s only; light purple traces on the right), respectively.
To perturb the response matrix R, we simply add Gaussian noise with zero mean and variance tied to the entries of R. That is, for each noisy perturbation matrix Δ_s (s = 1, …, S), its (i, j)th entry is sampled i.i.d. from the Gaussian distribution N(0, σ_ij²) with σ_ij = α ⋅ R_ij, where α is a hyperparameter controlling the intensity of the perturbation on R. For perturbing the encoded signal y, we inject non-negative noise ɛ_t (t = 1, …, T). Here, each entry of ɛ_t is sampled i.i.d. from the Gaussian distribution N(0, σ_ɛ²) and then passed through a non-negativity operator (such as max{·, 0}). In practice, we determine these hyper-parameters through empirical experiments. Details can be found in Sec. IV A.
In the case above, we considered adding Gaussian noise to perturb the data, which is simple yet effective in practice. However, should specific information about the noise be available under certain conditions, we can further refine both distributions p(R) and p(y) for the noise added to the response matrix and to the intermediate augmented encoded signal, respectively. As a result, for each simulated ground-truth spectrum x̃, S × T corresponding augmented encoded signal vectors ỹ_{s,t} are generated. This also demonstrates the generality and flexibility of our data augmentation method. Because this process injects structured noise at two levels of the device-informed simulated data generation, we term it Hierarchical Data Augmentation (HDA).
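Under the Gaussian choices described above, Algorithm 1 can be sketched in a few lines of NumPy. The default hyperparameter values mirror those reported in Sec. IV A, and max(·, 0) clipping is the assumed non-negativity operator:

```python
import numpy as np

def hda_augment(x_sim, R, alpha=5e-2, sigma_eps=1e-5, S=2, T=4, rng=None):
    """Hierarchical data augmentation: perturb R, then each encoded signal."""
    rng = rng or np.random.default_rng()
    pairs = []
    for _ in range(S):
        Delta = rng.normal(0.0, alpha * np.abs(R))    # entrywise sigma_ij = alpha * R_ij
        R_s = R + Delta                               # perturbed response matrix
        for _ in range(T):
            eps = rng.normal(0.0, sigma_eps, size=R.shape[0])
            eps = np.maximum(eps, 0.0)                # assumed non-negativity operator
            pairs.append((x_sim, R_s @ x_sim + eps))  # augmented encoded signal
    return pairs                                      # S * T pairs per spectrum
```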
B. Network architecture: ReSpecNN
For training with augmented data generated by HDA, we propose a deep neural network architecture tailored for spectrometer signal reconstruction, hereinafter referred to as ReSpecNN. The architecture is visualized in Fig. 5 and explained in detail below.
The ReSpecNN model is specifically tailored for our HDA scheme. In contrast to the previous model by Kim et al.,15 we incorporate an additional fully connected block with skip connections to enhance the adaptability of our model in handling HDA-perturbed data. Instead of directly absorbing the information from the measured response matrix R into the calculations, which could be misleading due to inaccuracies in the measurement of R, we aim for the extra fully connected block to learn a more robust inverse of R from our HDA-perturbed data. This, in turn, enhances the overall robustness of the model, leading to improved reconstruction performance. Notably, the previous model by Kim et al.15 initialized the input data with information taken directly from the measured R, which might contain noise or misleading information; in comparison, through HDA, our ReSpecNN autonomously learns a more robust inverse of R during training.
Our model consists of two fully connected modules and a 1D convolutional module. The first fully connected module (rec_fc) constructs each spectrum at a gross level. To avoid possible overfitting at this stage, dropout layers are incorporated after each fully connected layer. This module is followed by a convolutional module with three 1D convolutional layers (Conv1d in PyTorch), each followed by a max-pooling layer and a ReLU activation, which serves to extract potential spatial features along the wavelength dimension.
Subsequently, another fully connected module (rf_fc), consisting of fully connected layers and dropout layers mirroring the rec_fc module, is employed for the detailed reconstruction of finer spectral information. Furthermore, a residual connection links the initial output from rec_fc with the detailed output from rf_fc to improve the quality of the final prediction without losing the key spectral features from the initial reconstruction. A sigmoid function is applied at the end to ensure the final output spectrum is smooth and continuous.
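The following PyTorch sketch mirrors the architecture described above; the layer widths, channel counts, kernel sizes, dropout rate, and hidden activations are illustrative assumptions, not the exact ReSpecNN configuration.

```python
import torch
import torch.nn as nn

class ReSpecNNSketch(nn.Module):
    """Minimal sketch of the described architecture; sizes are assumptions."""
    def __init__(self, num_encoders=16, num_wavelengths=205, hidden=256, p_drop=0.2):
        super().__init__()
        # rec_fc: coarse reconstruction, dropout after each fully connected layer
        self.rec_fc = nn.Sequential(
            nn.Linear(num_encoders, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, num_wavelengths), nn.ReLU(), nn.Dropout(p_drop),
        )
        # conv module: three Conv1d layers, each followed by max-pooling and ReLU
        self.conv = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=3, padding=1), nn.MaxPool1d(2), nn.ReLU(),
            nn.Conv1d(8, 16, kernel_size=3, padding=1), nn.MaxPool1d(2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.MaxPool1d(2), nn.ReLU(),
        )
        conv_out = 32 * (num_wavelengths // 8)   # length halves at each pooling
        # rf_fc: refinement module mirroring rec_fc
        self.rf_fc = nn.Sequential(
            nn.Linear(conv_out, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, num_wavelengths),
        )

    def forward(self, y):
        coarse = self.rec_fc(y)                          # gross-level spectrum
        feats = self.conv(coarse.unsqueeze(1)).flatten(1)
        fine = self.rf_fc(feats)                         # finer spectral details
        return torch.sigmoid(coarse + fine)              # residual link + sigmoid
```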
IV. RESULTS
In this section, we illustrate the performance of our Sim2Real approach on real-world data by comparing both the test accuracy and the inference time with a state-of-the-art optimization-based method: NNLS with Total Variation (NNLS-TV) regularization.13
A. Experimental setup
In the simulated data generation process, a batch size of 256 is utilized. During the hierarchical data augmentation process, we set S = 2 and T = 4. For the noise control parameters, we chose α = 5 × 10−2 and σɛ = 1 × 10−5. In practice, both S and T can be determined empirically. For instance, we trained one model for each of S = 0, 1, 2, 3 until convergence and obtained corresponding MAEs of 16.1, 1.96, 1.23, and 1.94; consequently, we selected S = 2. It is worth noting that as S increases, the size of the training data also increases; therefore, for more efficient training, our model favors smaller values of S. The noise levels α and σɛ are determined similarly, either by sweeping over candidate values or by employing a binary search within an interval to find an appropriate value. During the training stage, we employed the MSELoss function in PyTorch as our training loss. The optimizer used was Adam with a learning rate of 3 × 10−4.
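Putting the pieces together, a minimal training loop under the stated setup might look as follows, reusing the simulate_spectrum, hda_augment, and ReSpecNNSketch sketches from earlier sections; the number of base spectra and the epoch count are placeholders, and R denotes the measured response matrix.

```python
import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset

# Build HDA-augmented training data from the earlier sketches.
pairs = []
for _ in range(1000):                     # number of base spectra: illustrative
    x_sim = simulate_spectrum()
    pairs += hda_augment(x_sim, R)        # S * T augmented pairs per spectrum
ys = torch.tensor(np.array([y for _, y in pairs]), dtype=torch.float32)
xs = torch.tensor(np.array([x for x, _ in pairs]), dtype=torch.float32)
loader = DataLoader(TensorDataset(ys, xs), batch_size=256, shuffle=True)

model = ReSpecNNSketch()
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
loss_fn = torch.nn.MSELoss()              # MSE training loss, as in the text
for epoch in range(100):                  # epoch count: illustrative placeholder
    for y_batch, x_batch in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(y_batch), x_batch)
        loss.backward()
        optimizer.step()
```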
B. Real-world data evaluation
To demonstrate the performance of our Sim2Real framework, we conducted evaluations using a real-world dataset collected with a portable spectrometer device.13 This dataset includes nine single-peak and five multiple-peak spectral signal samples, covering the wavelength range from 400 to 650 nm needed for the spectrometer’s targeted mobile application. The diverse peak profiles and the wide wavelength span make the dataset well-suited for testing the model under realistic conditions.
Figure 6 presents two spectrum reconstruction examples using our proposed deep neural network and the NNLS-TV13 approach. When compared with the ground truths obtained by a commercial spectrometer, our approach exhibits reliable performance on our miniature chip-scale spectrometer.
For peak position predictions, Fig. 7 shows that our approach yields results comparable with NNLS-TV13 on the aforementioned real-world dataset. In Fig. 7, we calculated the mean absolute error (MAE), defined as the average of the absolute values of the relative peak position errors across all spectral data samples. While offering a significantly faster inference speed (details in Sec. IV C), our model still maintains the relative errors within ±5%.
Relative peak intensity represents a more difficult challenge, as illustrated in Fig. 8; nevertheless, our method effectively constrains the maximum relative error in intensity to under 50%. In contrast, the NNLS-TV approach occasionally exhibits significant prediction errors on certain spectral data samples.
C. Execution time analysis for reconstruction methods
Our miniature chip-scale spectrometers have been integrated into wearable health monitoring devices. In such scenarios, the inference time for reconstruction often emerges as a critical factor for fast, real-time health monitoring. Additionally, our pre-trained Sim2Real method offers the potential to conserve battery life, as it requires only one forward pass per inference, unlike NNLS-TV, which relies on an iterative solver. Overall, prioritizing fast inference is advantageous for real-time monitoring and extended battery life because of the low computational and energy costs involved.
To demonstrate the inference-time advantage, we compared the execution time of our pre-trained model with the NNLS-TV solver on the real dataset.13 The results in Table I show that our model significantly reduces inference time. When conducting the execution time experiments in Table I, we ensured fairness by utilizing only the CPU for both NNLS-TV and our DL approach. Moreover, tests were performed on identical hardware to guarantee that the comparison solely reflects the computational efficiency of each method.
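A sketch of this CPU-only timing protocol for the DL side is given below; y_real is a placeholder name for the array of measured encoded signals.

```python
import time
import torch

model.eval()
y = torch.from_numpy(y_real).float()   # y_real: placeholder for measured readouts
with torch.no_grad():                  # inference is a single forward pass (CPU)
    start = time.perf_counter()
    x_hat = model(y)
    elapsed = time.perf_counter() - start
print(f"DL inference time on CPU: {elapsed:.4f} s")
```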
D. Domain shift in reconstructive spectroscopy
As previously discussed in Sec. II D, a domain shift usually exists between the device-informed simulated data ỹ and the real-world data y. Many works only focus on improving reconstruction accuracy within datasets of the same distribution (i.e., training and testing exclusively on either simulated data or a specific real-measured dataset) without attempting to bridge the domain gap between them. For instance, ResCNN15 has been demonstrated to achieve excellent spectral reconstruction performance and maintain stable results even under some noisy conditions; it preprocesses the input data from y to R†y (where R† is the pseudoinverse of R) for improved results. However, ResCNN trained only on simulated data does not perform optimally when directly applied to real-world data.13
Figure 3 compares the root mean square error (RMSE) under the Sim2Real setting between the spectral signals reconstructed by ResCNN and by our ReSpecNN against the ground truths, for both the device-informed simulated data and the real data collected from our spectrometer.13 While ResCNN nearly perfectly fits the simulated data (with a lower RMSE), its performance on the real dataset13 drops significantly under the Sim2Real training setting, confirming the domain shift phenomenon.
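For reference, the RMSE compared in Fig. 3 follows the standard definition; any additional per-sample normalization used in the figure is not specified here.

```python
import numpy as np

def rmse(x_hat, x_true):
    """Root mean square error between a reconstruction and its ground truth."""
    x_hat, x_true = np.asarray(x_hat), np.asarray(x_true)
    return np.sqrt(np.mean((x_hat - x_true) ** 2))
```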
V. CONCLUSION AND DISCUSSION
In this paper, we introduce a novel Sim2Real framework for spectral signal reconstruction in spectroscopy, focusing on sampling efficiency and inference time. Throughout the training process, only a single measurement of the response matrix R of the spectrometer device is required, with all other training data being simulated. To address the domain shift between the real-world data and the device-informed simulated data, our Sim2Real framework introduces a hierarchical data augmentation approach to train our deep learning model. Furthermore, our neural network, ReSpecNN, which is trained exclusively on such augmented simulated data, is specifically designed for the reconstruction of real spectral signals. In our experiments, even with the simplest Gaussian noise augmentation, our Sim2Real method achieved a ten-fold speed-up during inference compared to the state-of-the-art optimization-based solver NNLS-TV,13 while demonstrating on-par performance in terms of solution quality. Although using only Gaussian noise for augmentation is a limitation, the flexibility of our data augmentation suggests that refining the perturbation model of the response matrix R, for example by experimentally determining the noise distribution, may further improve our Sim2Real framework.
In short, our hierarchical data augmentation strategy significantly improves model robustness for spectral signal reconstruction by realistically simulating the noise that can occur within the spectrometer device. However, it does have limitations. For example, extreme outliers that arise in some of the real spectral data may still not be represented adequately, even after extensive data augmentation (corresponding to high noise levels in our method), leading to suboptimal reconstruction for these extreme cases. Nevertheless, this limitation is a common challenge across data augmentation techniques.
Moving forward, to better handle outliers and further improve the robustness and accuracy of our model, we plan to pursue the following approaches. First, it would be interesting to explore improved noise patterns within our data augmentation process, for instance, by incorporating the idea of adversarial training of deep networks.35 Furthermore, while we focus on scenarios without labeled real training data, limited labeled data may be available in practice. Such data can be used to fine-tune our model, but their small size risks overfitting; to mitigate this, we can use simulated training data for regularization during fine-tuning, as suggested in a recent study.36
Regarding model scalability, the input size in the spectral reconstruction problem depends on the number of spectral encoders used within the spectrometer device, which varies with the specific problem setup and device design. In this paper, the number of spectral encoders is fixed at 16. Looking ahead, we could explore the scalability of our model under different problem setups, for instance, reconstruction with polarized spectral encoded signals.
ACKNOWLEDGMENTS
J.C., P.L., Q.Q., and Y.W. acknowledge support from NSF CAREER CCF Grant No. 2143904, NSF CCF Grant No. 2212066, MIDAS Propelling Original Data Science (PODS) Grant, and a gift grant from KLA. P.-C.K. acknowledges the support from NSF ECCS Grant No. 2317047 for device fabrication and spectral data collection. Y.W. also acknowledges support from the Eric and Wendy Schmidt AI in Science Postdoctoral Fellowship, a Schmidt Futures program.
AUTHOR DECLARATIONS
Conflict of Interest
The authors have no conflicts to disclose.
Author Contributions
J.C. and P.L. contributed equally to the work.
Jiyi Chen: Conceptualization (equal); Data curation (equal); Investigation (equal); Methodology (equal); Software (equal); Validation (equal); Visualization (equal); Writing – original draft (equal). Pengyu Li: Conceptualization (equal); Formal analysis (equal); Investigation (equal); Methodology (equal); Visualization (equal); Writing – original draft (equal). Yutong Wang: Visualization (supporting); Writing – review & editing (supporting). Pei-Cheng Ku: Conceptualization (equal); Data curation (equal); Formal analysis (equal); Funding acquisition (equal); Investigation (equal); Methodology (equal); Project administration (equal); Resources (equal); Supervision (lead); Validation (equal); Visualization (equal); Writing – original draft (equal); Writing – review & editing (supporting). Qing Qu: Supervision (lead); Writing – review & editing (supporting).
DATA AVAILABILITY
The testing spectral data and our experimental results are available at https://github.com/j1goblue/Rec_Spectrometer.