Deep learning-based methods in computational microscopy are powerful but generally face challenges stemming from limited generalization to new types of samples and the need for large and diverse training datasets. Here, we demonstrate a few-shot transfer learning method that helps a holographic image reconstruction deep neural network rapidly generalize to new types of samples using small datasets. We pre-trained a convolutional recurrent neural network on a dataset with three different sample types and ∼2000 unique sample field-of-views, which serves as the backbone model. By fixing the trainable parameters of the recurrent blocks and transferring the rest of the convolutional blocks of the pre-trained model, we reduced the number of trainable parameters by ∼90% compared with standard transfer learning, while achieving equivalent generalization. We validated the effectiveness of this approach by successfully generalizing to new sample types using only 80 unique field-of-views for training, achieving (i) ∼2.5-fold acceleration of the convergence speed, (ii) ∼20% reduction of the computation time per epoch, and (iii) improved generalization to new sample types over baseline network models trained from scratch. This few-shot transfer learning approach can potentially be applied to other microscopic imaging methods, helping them generalize to new types of samples without the need for extensive training time and data.

The past decade has witnessed rapid advances stemming from the combination of deep learning and computational imaging. For example, deep learning-enabled computational microscopy has been demonstrated for a wide spectrum of imaging tasks, including microscopic image super-resolution,1–3 cross-modality imaging,4–7 and volumetric imaging,8–10 among others.11–15 As another example, holographic imaging and phase retrieval algorithms have also significantly benefitted from deep learning-based methods that reconstruct complex optical fields from intensity-only measurements, providing improved reconstruction accuracy, speed, and extended depth-of-field.16–28

However, applying these deep learning-based image reconstruction methods also poses challenges, including the generalization of trained models to new datasets from new types of samples. Due to the limited scale and diversity of available training data, and potential distribution shifts in the acquired images resulting from, e.g., varying sample preparation and imaging protocols, a trained neural network model can suffer from inference errors and fail to generalize to new, unknown types of samples. Transfer learning29 and domain adaptation30 are two popular few-shot learning methods used to generalize network models on limited labeled data. The former fine-tunes some or all of the parameters of a pre-trained model on a small dataset drawn from the new distribution, while the latter aims to generalize models to a known target domain during training without using labels from the target domain. For such approaches to work, the pre-trained model must be a high-capacity network that can adapt well to different domains, trained on a large, diverse dataset that represents the distribution variations.

Here, we demonstrate a few-shot transfer learning method for holographic image reconstruction using a recurrent neural network (RNN) that achieves successful generalization to new sample types never seen during training. To demonstrate this approach, we pre-trained a convolutional RNN on a large holographic dataset composed of various types of samples (blood smears, Pap smears, and lung tissue sections), which served as our backbone model. We show that this backbone model can be rapidly transferred to small training sets of new sample types (e.g., prostate and salivary gland tissue sections, never seen/used before), converging ∼2.5× faster than baseline models trained from scratch while saving ∼20% of the training time per epoch and using ∼90% fewer trainable parameters, achieved by fixing the RNN blocks (backbone) of the model, i.e., not updating their parameters during transfer learning. The main contributions of this work are (1) a pre-trained holographic image reconstruction model that is suitable for fast few-shot learning on new types of samples and (2) a rapid transfer learning scheme that reduces the training time and the number of trainable parameters by fixing the RNN blocks in the model. We successfully transferred the backbone RNN model to small-scale prostate and salivary gland datasets that were never used during the training phase and achieved microscopic reconstruction of in-line holograms, correctly revealing the phase and amplitude distributions of these new types of tissue samples with a minimal amount of training data and time.

Experiments in this work were implemented using a lens-free in-line holographic microscope. A broadband light source (WhiteLase Micro, NKT Photonics) was used for illumination, spectrally filtered by an acousto-optic tunable filter (530 nm). A complementary metal–oxide–semiconductor (CMOS) image sensor (IMX 081, Sony) captured the raw holograms. A 3D positioning stage (MAX 606, Thorlabs, Inc.) was used to move the CMOS sensor to perform precise lateral and axial shifts. The samples of interest were directly placed between the light source and the CMOS sensor. The typical sample-to-source distance (z1) and sample-to-sensor distance (z2) used for imaging ranged from 5 to 10 cm and from 400 to 600 µm, respectively. All hardware was controlled by a customized LabVIEW program during the imaging process. Eight holograms were captured sequentially with an approximate axial spacing of 20 µm and a 10 ms integration time per hologram.

All human samples involved in this work were deidentified and prepared from existing specimens, without a link or identifier to the patient. Human prostate and lung tissue slides were prepared by and acquired from the UCLA Translational Pathology Core Laboratory (TPCL). Pap smear slides were provided by the UCLA Department of Pathology. Blood smear slides were provided by the UCLA Department of Internal Medicine.

The raw in-line holograms were algorithmically super-resolved. To implement pixel super-resolution, 6 × 6 in-line holograms were captured for each field-of-view (FOV) with subpixel lateral shifts applied using the 3D positioning stage (MAX606, Thorlabs, Inc.). The relative lateral shifts were first estimated by an image correlation-based algorithm, and the 36 holograms were then shifted and added to obtain a super-resolved hologram.31 The resulting super-resolved holograms were used for both multi-height phase retrieval (MH-PR) and neural network-based hologram reconstructions.
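For illustration, the shift-and-add step can be sketched in NumPy as follows, assuming the subpixel shifts have already been estimated (e.g., by cross-correlation) and rounding them onto a finer output grid; the function name, arguments, and this simplified placement scheme are assumptions, not the exact implementation used in this work.

```python
import numpy as np

def shift_and_add_sr(frames, shifts, factor=6):
    """Illustrative shift-and-add pixel super-resolution sketch.

    frames : list of (H, W) low-resolution hologram intensities
    shifts : list of (dy, dx) lateral shifts in low-resolution pixel units,
             e.g., estimated by an image correlation-based algorithm
    factor : upsampling factor of the super-resolved grid (6 for a 6 x 6 scan)
    """
    H, W = frames[0].shape
    acc = np.zeros((H * factor, W * factor))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Map each frame's subpixel shift onto the high-resolution grid
        oy = int(round(dy * factor)) % factor
        ox = int(round(dx * factor)) % factor
        acc[oy::factor, ox::factor] += frame
        cnt[oy::factor, ox::factor] += 1
    cnt[cnt == 0] = 1  # leave unsampled grid points at zero
    return acc / cnt
```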

Eight in-line holograms at different sample-to-sensor distances were captured to perform multi-height phase retrieval and generate the ground truth object field for each sample FOV.31–33 An autofocusing algorithm is first applied to estimate the sample-to-sensor distance of each hologram based on the edge sparsity criterion.34 Similar to other iterative phase retrieval algorithms,35 MH-PR iterates among the measurement planes and converges to an optimal estimate of the complex field of the object/sample. The first hologram, with its missing phase initialized to zero, is propagated to the remaining hologram planes using angular spectrum-based wave propagation36 and the estimated sample-to-sensor distances. At each hologram plane, the resulting field is updated using the measured hologram at that position: the amplitude of the resulting field is averaged with the measured hologram amplitude, while its phase is retained. One iteration is completed when all the measured holograms have been used in sequence, and this iterative algorithm typically converges after 10–30 iterations. Finally, the resulting field is back-propagated to the sample plane to retrieve the sample's phase and amplitude images.
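The core MH-PR update loop can be summarized by the following NumPy sketch, under simplifying assumptions (holograms supplied as amplitude images, a shared pixel pitch, and a fixed iteration count); it relies on an angular-spectrum propagation helper such as the `angular_spectrum_propagate` function sketched after the propagation equations below, and it is not the exact implementation used in this work.

```python
import numpy as np

def multi_height_phase_retrieval(holograms, z_list, wavelength, dx, n_iter=20):
    """Illustrative multi-height phase retrieval (MH-PR) sketch.

    holograms  : list of measured hologram amplitudes (sqrt of intensity),
                 ordered by their sample-to-sensor distances in z_list
    z_list     : sample-to-sensor distances of the measurement planes
    wavelength : illumination wavelength (same unit as z_list and dx)
    dx         : pixel pitch of the (super-resolved) holograms
    """
    # Initial guess: first hologram amplitude with zero phase
    field = holograms[0].astype(np.complex128)
    cur = 0
    for _ in range(n_iter):
        # One iteration: visit all measurement planes in sequence
        for nxt in list(range(1, len(holograms))) + [0]:
            field = angular_spectrum_propagate(field, wavelength,
                                               z_list[nxt] - z_list[cur], dx)
            # Amplitude update: average with the measurement, keep the phase
            amp = 0.5 * (np.abs(field) + holograms[nxt])
            field = amp * np.exp(1j * np.angle(field))
            cur = nxt
    # Back-propagate from the current plane to the sample plane
    return angular_spectrum_propagate(field, wavelength, -z_list[cur], dx)
```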

Free-space propagation is calculated through the angular spectrum propagation approach.36 The propagation along the z axis of a complex wave $U(x, y; z_0)$ at an axial position $z_0$ to a measurement plane at $z$ can be expressed as

$$U(x, y; z) = \mathrm{FSP}\{U, z - z_0\} = \mathcal{F}^{-1}\{\tilde{A}(k_x, k_y; z)\},$$

$$\tilde{A}(k_x, k_y; z) = \begin{cases} \mathcal{F}\{U(x, y; z_0)\}\, e^{i(z - z_0)\sqrt{k^2 - k_x^2 - k_y^2}}, & k_x^2 + k_y^2 < k^2, \\ 0, & k_x^2 + k_y^2 \ge k^2, \end{cases}$$

where $\mathcal{F}$ and $\mathcal{F}^{-1}$ denote the 2D Fourier transform and inverse Fourier transform operations, respectively. $x$, $y$ and $k_x$, $k_y$ refer to the 2D coordinates in the spatial and frequency domains, respectively, $k$ is the wave number of the illumination light, and $\tilde{A}$ stands for the spectrum of the complex wave $U$.
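A minimal NumPy implementation of this angular spectrum propagator (the FSP operator above) could look as follows; the function name, arguments, and sampling conventions are illustrative assumptions rather than the code used in this work.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dz, dx):
    """Angular spectrum free-space propagation (illustrative FSP sketch).

    field      : complex 2D field U(x, y; z0)
    wavelength : illumination wavelength (same length unit as dz and dx)
    dz         : propagation distance z - z0 (negative for back-propagation)
    dx         : pixel pitch of the field
    """
    ny, nx = field.shape
    k = 2 * np.pi / wavelength
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    kz_sq = k**2 - KX**2 - KY**2
    # Propagating waves get the phase factor; evanescent components are zeroed
    prop = np.where(kz_sq > 0,
                    np.exp(1j * dz * np.sqrt(np.maximum(kz_sq, 0.0))),
                    0.0)
    A = np.fft.fft2(field)          # spectrum of U at z0
    return np.fft.ifft2(A * prop)   # propagated field at z
```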

Root mean square error (RMSE), enhanced correlation coefficient (ECC), mean absolute error (MAE) and multiscale structural similarity index (SSIM) were used to evaluate the similarity between two images during the training and testing of the presented neural networks. Denoting the two images as x and y with dimensions of H × W, these metrics are defined as follows:

$$\mathrm{RMSE}(x, y) = \sqrt{\frac{1}{HW}\sum_{i=1}^{H}\sum_{j=1}^{W}\left(x_{i,j} - y_{i,j}\right)^2},$$

$$\mathrm{ECC}(x, y) = \mathrm{Re}\left\{\frac{\sum_{i=1}^{H}\sum_{j=1}^{W} x^{*}_{i,j}\, y_{i,j}}{\sqrt{\sum_{i=1}^{H}\sum_{j=1}^{W}\left|x_{i,j}\right|^{2} \sum_{i=1}^{H}\sum_{j=1}^{W}\left|y_{i,j}\right|^{2}}}\right\},$$

$$\mathrm{MAE}(x, y) = \frac{1}{HW}\sum_{i=1}^{H}\sum_{j=1}^{W}\left|x_{i,j} - y_{i,j}\right|,$$

$$\mathrm{SSIM}(x, y) = \left[\frac{2\mu_x\mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1}\right]^{\alpha_M} \prod_{s=1}^{M}\left[\frac{2\sigma_{x,s}\sigma_{y,s} + C_2}{\sigma_{x,s}^2 + \sigma_{y,s}^2 + C_2}\right]^{\beta_s}\left[\frac{\sigma_{xy,s} + C_3}{\sigma_{x,s}\sigma_{y,s} + C_3}\right]^{\gamma_s},$$

where $i$, $j$ are indices of image pixels, $\mu_{x,s}$ and $\sigma_{x,s}$ are the mean and standard deviation of the $2^{s-1}\times$ downsampled version of $x$, respectively, and $\sigma_{xy,s}$ is the covariance between the $2^{s-1}\times$ downsampled versions of $x$ and $y$. The other parameters used for the SSIM calculations are empirically set as $\beta_1=\gamma_1=0.0448$, $\beta_2=\gamma_2=0.2856$, $\beta_3=\gamma_3=0.3001$, $\beta_4=\gamma_4=0.2363$, $\alpha_5=\beta_5=\gamma_5=0.1333$, $M=5$, $C_1=(0.01L)^2$, and $C_2=2C_3=(0.03L)^2$, where $L = 255$ for 8-bit gray scale images.37 ECC values are calculated on complex images using both the amplitude and phase channels.
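The RMSE, MAE, and ECC metrics defined above can be computed with a few lines of NumPy, as in the sketch below (the multiscale SSIM is typically evaluated with a library routine, e.g., `tf.image.ssim_multiscale` in TensorFlow); the function names are illustrative.

```python
import numpy as np

def rmse(x, y):
    """Root mean square error between two (possibly complex) images."""
    return np.sqrt(np.mean(np.abs(x - y) ** 2))

def mae(x, y):
    """Mean absolute error between two images."""
    return np.mean(np.abs(x - y))

def ecc(x, y):
    """Enhanced correlation coefficient between two complex images."""
    num = np.sum(np.conj(x) * y)
    den = np.sqrt(np.sum(np.abs(x) ** 2) * np.sum(np.abs(y) ** 2))
    return np.real(num / den)
```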

Holograms and their corresponding ground truth fields were cropped into non-overlapping 512 × 512-pixel image patches, each corresponding to a unique 0.2 × 0.2 mm² sample FOV. Each raw hologram was then back-propagated to the estimated sample plane using a common sample-to-sensor distance of $\bar{z}_2 = 500~\mu\mathrm{m}$. Thus, each image pair/set contains M back-propagated input fields and a corresponding ground truth complex field with the amplitude and phase channels calculated using MH-PR. The image pair sets of each sample type were then divided into training, validation, and testing sets, where the testing set was strictly captured on a different patient slide not used in the training and validation sets. During transfer learning, data augmentation (flipping and rotation) was applied to expand the training set eightfold.
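The eightfold augmentation can be realized by combining the four 90° rotations with a mirror flip, applied identically to the input fields and their ground truth; the sketch below is an assumed implementation of the flipping/rotation augmentation described above.

```python
import numpy as np

def augment_eightfold(patch):
    """Return the eight flip/rotation variants of an (H, W) or (H, W, C)
    image patch, expanding the training set by a factor of eight.
    Illustrative sketch; apply the same transform to input and target."""
    variants = []
    for k in range(4):                        # 0, 90, 180, 270 degree rotations
        rot = np.rot90(patch, k, axes=(0, 1))
        variants.append(rot)
        variants.append(np.flip(rot, axis=1))  # mirrored copy
    return variants
```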

The holographic image reconstruction network (RH-M) follows the convolutional recurrent neural network architecture of Refs. 8 and 28. The down- and up-sampling paths of RH-M each consist of four consecutive convolutional blocks, with four RNN blocks connecting the corresponding convolutional blocks in the down- and up-sampling paths. Each RNN block consists of two convolutional gated recurrent unit (GRU)38 layers and one 1 × 1 convolutional layer.
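As a rough structural sketch (not the actual RH-M code), such an RNN block can be expressed in Keras as two convolutional recurrent layers followed by a 1 × 1 convolution; since Keras has no built-in convolutional GRU layer, `ConvLSTM2D` is used below as a stand-in for the convolutional GRU layers used in RH-M.

```python
import tensorflow as tf
from tensorflow.keras import layers

def rnn_block(filters):
    """Sketch of an RH-M-style RNN block. ConvLSTM2D stands in for the
    convolutional GRU layers of the actual architecture."""
    return tf.keras.Sequential([
        layers.ConvLSTM2D(filters, 3, padding="same", return_sequences=True),
        layers.ConvLSTM2D(filters, 3, padding="same", return_sequences=False),
        layers.Conv2D(filters, 1, padding="same"),
    ])

# Input: a sequence of M back-propagated hologram feature maps,
# shape (batch, M, height, width, channels); sizes here are arbitrary.
block = rnn_block(64)
features = tf.random.normal((1, 2, 64, 64, 64))
merged = block(features)   # aggregated features, shape (1, 64, 64, 64)
```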

A generative adversarial network (GAN)39 framework was employed in this work. The loss function used for both training and transfer learning is a linear combination of (1) the pixel-wise MAE loss, $L_{MAE}$, (2) the multiscale SSIM loss, $L_{SSIM}$, between the network output $\hat{y}$ and the ground truth image $y$, and (3) the adversarial loss, $L_{adv}$, given by the discriminator (D) network. Accordingly, the total loss for the generator (RH-M) can be expressed as

$$L_G = \alpha L_{MAE} + \beta L_{SSIM} + \gamma L_{adv},$$

where α, β, and γ are empirically set to 3, 1, and 0.3, respectively, for all models. The MAE and SSIM losses are defined as

$$L_{MAE} = \mathrm{MAE}(y, \hat{y}), \qquad L_{SSIM} = 1 - \mathrm{SSIM}(y, \hat{y}).$$

Squared loss terms were employed for the adversarial loss and the total discriminator loss ($L_D$),40

$$L_{adv} = \left(D(\hat{y}) - 1\right)^2,$$

$$L_D = \tfrac{1}{2} D(\hat{y})^2 + \tfrac{1}{2}\left(D(y) - 1\right)^2.$$
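In TensorFlow, these loss terms can be sketched as follows, assuming images normalized to [0, 1] (hence `max_val=1.0` in the multiscale SSIM call) and 4D tensors of shape (batch, H, W, channels); the function names and default weights mirror the equations above but are not the exact implementation used in this work.

```python
import tensorflow as tf

def generator_loss(d_fake, y, y_hat, alpha=3.0, beta=1.0, gamma=0.3):
    """L_G = alpha * L_MAE + beta * L_SSIM + gamma * L_adv (illustrative).

    d_fake : discriminator output D(y_hat)
    y, y_hat : ground truth and generator output, (batch, H, W, C) in [0, 1]
    """
    l_mae = tf.reduce_mean(tf.abs(y - y_hat))
    l_ssim = 1.0 - tf.reduce_mean(
        tf.image.ssim_multiscale(y, y_hat, max_val=1.0))
    l_adv = tf.reduce_mean(tf.square(d_fake - 1.0))   # least-squares GAN term
    return alpha * l_mae + beta * l_ssim + gamma * l_adv

def discriminator_loss(d_fake, d_real):
    """L_D = 0.5 * D(y_hat)^2 + 0.5 * (D(y) - 1)^2 (least-squares GAN)."""
    return (0.5 * tf.reduce_mean(tf.square(d_fake))
            + 0.5 * tf.reduce_mean(tf.square(d_real - 1.0)))
```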

Adam optimizers41 with learning rates of 10−5 and 10−6 were used for the generator and discriminator networks, respectively, in the backbone training. Decaying learning rates with initial values of 2 × 10−4 and 2 × 10−5 were applied for transfer learning on new datasets for the generator and the discriminator, respectively. Based on the convergence of the validation MAE losses, models were early stopped at 120 and 200 epochs for prostate and salivary gland samples, respectively. All models were realized in TensorFlow on a computer with an Intel Xeon W-2195 processor and four NVIDIA RTX 2080 Ti graphics processing units (GPUs).
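A minimal sketch of this optimizer configuration is given below; the exponential decay schedule (decay steps and rate) is an assumption, since only the initial learning rates are specified above.

```python
import tensorflow as tf

# Backbone (pre-)training: fixed learning rates for generator/discriminator
gen_opt_backbone = tf.keras.optimizers.Adam(learning_rate=1e-5)
disc_opt_backbone = tf.keras.optimizers.Adam(learning_rate=1e-6)

# Transfer learning: decaying learning rates (schedule parameters assumed;
# only the initial values 2e-4 / 2e-5 are specified in the text)
gen_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=2e-4, decay_steps=1000, decay_rate=0.9)
disc_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=2e-5, decay_steps=1000, decay_rate=0.9)
gen_opt_transfer = tf.keras.optimizers.Adam(learning_rate=gen_schedule)
disc_opt_transfer = tf.keras.optimizers.Adam(learning_rate=disc_schedule)
```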

For holographic image reconstruction, we used a convolutional RNN architecture (named RH-M28), which was previously demonstrated to be effective for multi-height phase retrieval and holographic image reconstruction [see Fig. 1(a)]. M = 2 intensity-only holograms recorded by a lens-free in-line holographic microscope42 are first back-propagated with zero phase (i.e., without any phase retrieval) by an axial distance of $\bar{z}_2 = 500~\mu\mathrm{m}$, and the resulting image sequence is then fed into the network as its input (see the section titled Methods). In the RH-M network, the features in the propagated holograms are extracted by a series of convolutional blocks at different scales and then aggregated by the RNN blocks [marked by the dash-lined box in Fig. 1(a)]. The ground truth/target sample fields (including the sample phase and amplitude) were created by the MH-PR algorithm using eight separate holograms captured at different sample-to-sensor distances.31 Here, we pre-trained an RH-M model using a large holographic image dataset including ∼2000 unique sample FOVs from three types of samples: blood smears, Pap smears, and lung tissue sections. On average, each sample type has N ≅ 700 non-overlapping, unique FOVs, and M = 5 input holograms were used for reconstruction. However, standard RH-M networks suffer from limited generalization and fail to reconstruct entirely new types of samples that were never seen by the network before; see, for example, the blind testing results of prostate tissue sections in Fig. 1(a).

FIG. 1.

The pre-trained RNN backbone model and few-shot transfer learning for holographic image reconstruction. (a) RH-M backbone network pre-trained from scratch on a large dataset (composed of N = 660 FOVs for each sample type) with three types of samples (blood smears, Pap smears, and lung tissue sections). The network is later directly tested on a new type of sample (prostate tissue section) but fails in its reconstruction, since this type of sample was never seen by the network before. (b) Transfer learning of an RH-M network from the pre-trained model with a fixed RNN backbone. A small dataset of the new sample type with Nt = 80 image FOVs is used for transfer learning. After a fast transfer learning process, RH-M generalizes very well to testing slides of the new sample type. Scale bar: 50 µm.


Inspired by the fact that the axial differences in hologram intensity patterns reflect/encode the sample's phase information,43 we fixed the RNN blocks (backbone) of the pre-trained RH-M model, which reflect the differences between the input holograms and merge their features, and transferred the rest of the model to a new type of sample, as shown in Fig. 1(b). After a rapid transfer learning process using a small dataset consisting of Nt = 80 FOVs of the new sample type, the resulting model successfully adapts to the new data distribution of prostate tissue sections and generalizes very well, successfully reconstructing both the phase and amplitude information of the sample [see Figs. 1(b) and 2].
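In a Keras-style implementation, this fixed-backbone transfer can be set up by marking the RNN-block layers as non-trainable before fine-tuning, as in the hypothetical sketch below; the "rnn_block" layer-name prefix is assumed for illustration and is not taken from the actual RH-M code.

```python
import tensorflow as tf

def prepare_for_transfer(pretrained_model):
    """Freeze the RNN (backbone) blocks of a pre-trained model and keep the
    convolutional blocks trainable. The 'rnn_block' name prefix is a
    hypothetical convention used only for this sketch."""
    for layer in pretrained_model.layers:
        layer.trainable = not layer.name.startswith("rnn_block")
    n_trainable = sum(
        tf.size(w).numpy() for w in pretrained_model.trainable_weights)
    print(f"Trainable parameters after freezing: {n_trainable:,}")
    return pretrained_model
```

Only the layers left trainable are then updated during the fine-tuning on the small dataset of the new sample type, which is what reduces the trainable parameter count and the per-epoch training time.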

FIG. 2.

Transfer learning results of RH-M backbone on prostate tissue sections. Models 1 and 2 were transferred from the pre-trained RH-M model with and without fixed RNN backbone, respectively. Baseline models were trained from scratch. (a) Back-propagated input holograms Ui and ground truth complex field obtained by MH-PR using eight input holograms. (b) Reconstruction results of models 1, 2 and the baseline model transferred/trained on small training datasets with four different scales (i.e., different Nt/N ratios). The numbers in parentheses are the number of trainable parameters for each model. All the models were trained for 120 epochs. (c) The training time of RH-M models with increasing training dataset ratio (Nt/N). (d) Amplitude RMSE of the reconstructed complex fields by RH-M models with respect to the ground truth fields. The mean and the standard deviation of RMSE were calculated over a testing set with 49 unique FOVs. (e) Training and validation loss curves of transferred models and the baseline model on a small dataset with Nt/N = 12.1%.


Major advantages of our approach, resulting from the generalization of a pre-trained model in transfer learning, include faster convergence and better adaptability to small training datasets of new sample types. To quantify these advantages, we evaluated the transfer learning performance of the backbone model using a series of datasets with different Nt/N ratios. As illustrated in Fig. 2, the pre-trained model was transferred onto prostate datasets using different numbers of unique sample FOVs, i.e., Nt. We created models using the reported transfer learning scheme with a fixed backbone (model 1) and standard transfer learning (without a fixed backbone, model 2) on four training datasets of various Nt/N ratios; each input sequence contained Mt = 2 back-propagated holograms during the transfer. For comparison, we additionally trained the same network from scratch (baseline model) on the same four datasets. After convergence (about 120 epochs), we tested the models on an additional testing dataset of a prostate tissue section excluded from all training sets. As indicated in Fig. 2(b), models 1 and 2 showed reconstruction performance equivalent to the baseline model on all four datasets with different Nt/N. Increasing the training dataset ratio improved the reconstruction quality, as further indicated by the decreasing amplitude RMSE and the increasing ECC values (see the “Methods” section). Figure 2(d) further confirms this conclusion through the amplitude RMSE of the model outputs on the testing set of 49 unique FOVs. For example, as Nt/N increases from 12.1% to 24.3%, all three models generalize better on the testing set, illustrating the benefit of larger and more diverse training datasets.

Furthermore, as noted in Fig. 2(b), the reported approach (model 1) used only ∼4.33 × 10⁶ trainable parameters, compared to >36 × 10⁶ for the other models, saving ∼90% of the trainable parameters. Another advantage of the reported approach is the reduced time cost of transfer learning. As reported in Fig. 2(c), during the training phase our approach (model 1, blue bars) was up to 19% faster on the same GPU machine (see “Methods”) compared to standard transfer learning (model 2, orange bars) and the baseline model (green bars). Figure 2(e) further compares the training and validation MAE values for the transferred models and the baseline model on a small prostate tissue section dataset (with Nt/N = 12.1%). The transferred models (models 1 and 2, blue and orange curves, respectively) both converged ∼2.5× faster than the baseline model (green curves), saving about 60% of the training epochs needed to reach the same performance in terms of the validation MAE loss. In other words, the faster convergence of the transferred models helps them adapt to small datasets of the new sample type much faster than the baseline model. As illustrated in Fig. 2(d), after 60 training epochs (before convergence), the transferred models achieved an image reconstruction performance (e.g., amplitude RMSE) similar to that of the converged models after 120 training epochs. In contrast, the baseline model after the same 60 training epochs was far from convergence and achieved a much worse amplitude RMSE value. Similar convergence behavior was observed for the other three Nt/N ratios in our experiments.

Next, we evaluated the generalization of the backbone model to different input sequence lengths Mt using an additional, new sample type. The training set was captured on a few salivary gland tissue sections (Nt/N = 24.8%), and the backbone model was transferred to datasets with Mt = 2, 3, 4, and 5 holograms, respectively, with and without a fixed backbone. In addition, baseline models with Mt = 2, 3, 4, and 5 were trained from scratch (using the same amount of data) for comparison. The models were then blindly tested using another testing set [Fig. 3(a)]. As shown in Fig. 3(b), the transferred models (models 1 and 2) successfully reconstructed the sample complex field with high fidelity, reflected by the low amplitude RMSE and high ECC values. Furthermore, adding extra input holograms, i.e., increasing Mt, further enhanced the reconstruction accuracy. In contrast, the output images of the baseline models (labeled green) are severely contaminated by artifacts (due to the limited amount of training data available for the new sample type), scoring worse amplitude RMSE and ECC values than those achieved by our transferred models. Figure 3(c) further illustrates the amplitude RMSE values of these models' outputs on an external testing set with 40 input–ground truth pairs. For all Mt values, the transferred models (blue and orange bars) significantly outperform the baseline models (green bars). Both Figs. 3(b) and 3(c) indicate that the reconstruction accuracy of the transferred models can be further improved by increasing Mt, confirming the effectiveness of the pre-trained RNN backbone for multi-height holographic image reconstruction. On the other hand, the baseline models trained from scratch failed to learn from the small training sets and could not efficiently utilize the additional holograms, resulting in a flat trend of the amplitude RMSE values [green bars in Fig. 3(c)]. As more input holograms are fed into the transferred models, the presented approach with a fixed backbone (model 1, blue bars) exhibited slightly inferior reconstruction performance compared with the standard transfer learning model (model 2, orange bars), due to the increasing complexity of the task and its limited number of trainable parameters. Nevertheless, the presented approach still generalizes better to the new sample data than the baseline model (green bars) for all Mt = 2, 3, 4, and 5. Figure 3(d) further compares the training time of the transferred models and the baseline model with respect to Mt, demonstrating the fast generalization of the backbone model.

FIG. 3.

Transfer learning results of the RH-M backbone on salivary gland tissue sections. Models 1 and 2 were transferred from the pre-trained RH-M model with and without a fixed RNN backbone, respectively. The baseline model was trained from scratch. (a) Back-propagated input holograms Ui and the ground truth field obtained by MH-PR from eight input holograms. (b) Reconstruction results of models 1, 2 and the baseline model transferred/trained on small training datasets with different numbers of input holograms (Mt = 2, 3, 4, 5). (c) Amplitude RMSE of the reconstructed fields by the RH-M models with respect to the ground truth fields, averaged over 40 unique testing input–ground truth pairs. (d) Training time of the transferred models and the baseline model on datasets with different Mt.


In this work, we presented a transfer learning-based few-shot learning method that improves the generalization of deep neural networks to new sample types in computational holographic microscopy when only small datasets of the new sample types are available; we demonstrated the success of this approach using a convolutional RNN performing multi-height holographic image reconstruction. We established an RNN backbone model and a transfer learning scheme that reduce both the computation time and the number of trainable parameters compared to standard transfer learning approaches. The generalization of the backbone model was validated on prostate and salivary gland tissue datasets that were never seen by the network before. Compared with baseline models trained from scratch, the RNN models transferred from the backbone converged faster and generalized better on small datasets of new sample types. The reported transfer learning framework substantially enhances the generalization of deep neural network-based holographic imaging to new types of samples.

The authors have no conflicts to disclose.

The data that support the findings of this study are available from the corresponding author upon reasonable request.

1. W. Ouyang, A. Aristov, M. Lelek, X. Hao, and C. Zimmer, Nat. Biotechnol. 36, 460 (2018).
2. H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Günaydın, L. A. Bentolila, C. Kural, and A. Ozcan, Nat. Methods 16, 103 (2019).
3. C. Qiao, D. Li, Y. Guo, C. Liu, T. Jiang, Q. Dai, and D. Li, Nat. Methods 18, 194 (2021).
4. Y. Rivenson, H. Wang, Z. Wei, K. de Haan, Y. Zhang, Y. Wu, H. Günaydın, J. E. Zuckerman, T. Chong, A. E. Sisk, L. M. Westbrook, W. D. Wallace, and A. Ozcan, Nat. Biomed. Eng. 3, 466 (2019).
5. E. M. Christiansen, S. J. Yang, D. M. Ando, A. Javaherian, G. Skibinski, S. Lipnick, E. Mount, A. O’Neil, K. Shah, A. K. Lee, P. Goyal, W. Fedus, R. Poplin, A. Esteva, M. Berndl, L. L. Rubin, P. Nelson, and S. Finkbeiner, Cell 173, 792 (2018).
6. K. de Haan, Y. Zhang, J. E. Zuckerman, T. Liu, A. E. Sisk, M. F. P. Diaz, K.-Y. Jen, A. Nobori, S. Liou, S. Zhang, R. Riahi, Y. Rivenson, W. D. Wallace, and A. Ozcan, Nat. Commun. 12, 4884 (2021).
7. S. Cheng, S. Fu, Y. M. Kim, W. Song, Y. Li, Y. Xue, J. Yi, and L. Tian, Sci. Adv. 7, eabe0431 (2021).
8. L. Huang, H. Chen, Y. Luo, Y. Rivenson, and A. Ozcan, Light: Sci. Appl. 10, 62 (2021).
9. E. Nehme, D. Freedman, R. Gordon, B. Ferdman, L. E. Weiss, O. Alalouf, T. Naor, R. Orange, T. Michaeli, and Y. Shechtman, Nat. Methods 17, 734 (2020).
10. X. Yang, L. Huang, Y. Luo, Y. Wu, H. Wang, Y. Rivenson, and A. Ozcan, ACS Photonics 8, 2174 (2021).
11. T. Nguyen, Y. Xue, Y. Li, L. Tian, and G. Nehmetallah, Opt. Express 26, 26470 (2018).
12. G. Barbastathis, A. Ozcan, and G. Situ, Optica 6, 921 (2019).
13. K. de Haan, Y. Rivenson, Y. Wu, and A. Ozcan, Proc. IEEE 108, 30 (2020).
14. H. Pinkard, Z. Phillips, A. Babakhani, D. A. Fletcher, and L. Waller, Optica 6, 794 (2019).
15. E. Bostan, R. Heckel, M. Chen, M. Kellman, and L. Waller, Optica 7, 559 (2020).
16. A. Goy, K. Arthur, S. Li, and G. Barbastathis, Phys. Rev. Lett. 121, 243902 (2018).
17. H. Wang, M. Lyu, and G. Situ, Opt. Express 26, 22603 (2018).
18. Y. Park, C. Depeursinge, and G. Popescu, Nat. Photonics 12, 578 (2018).
19. Y. Jo, H. Cho, S. Y. Lee, G. Choi, G. Kim, H.-S. Min, and Y. Park, IEEE J. Sel. Top. Quantum Electron. 25, 6800914 (2019).
20. Z. Ren, Z. Xu, and E. Y. Lam, Adv. Photonics 1, 016004 (2019).
21. K. Wang, J. Dou, Q. Kemao, J. Di, and J. Zhao, Opt. Lett. 44, 4765 (2019).
22. D. Yin, Z. Gu, Y. Zhang, F. Gu, S. Nie, J. Ma, and C. Yuan, IEEE Photonics J. 12, 3900312 (2020).
23. M. Deng, S. Li, A. Goy, I. Kang, and G. Barbastathis, Light: Sci. Appl. 9, 36 (2020).
24. Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, Light: Sci. Appl. 7, 17141 (2018).
25. Y. Wu, Y. Rivenson, Y. Zhang, Z. Wei, H. Günaydin, X. Lin, and A. Ozcan, Optica 5, 704 (2018).
26. Y. Wu, Y. Luo, G. Chaudhari, Y. Rivenson, A. Calis, K. de Haan, and A. Ozcan, Light: Sci. Appl. 8, 25 (2019).
27. Y. Rivenson, Y. Wu, and A. Ozcan, Light: Sci. Appl. 8, 85 (2019).
28. L. Huang, T. Liu, X. Yang, Y. Luo, Y. Rivenson, and A. Ozcan, ACS Photonics 8, 1763 (2021).
29. S. J. Pan and Q. Yang, IEEE Trans. Knowl. Data Eng. 22, 1345 (2010).
30. E. Tzeng, J. Hoffman, K. Saenko, and T. Darrell, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2017), pp. 7167–7176.
31. A. Greenbaum and A. Ozcan, Opt. Express 20, 3129 (2012).
32. A. Greenbaum, U. Sikora, and A. Ozcan, Lab Chip 12, 1242 (2012).
33. Y. Rivenson, Y. Wu, H. Wang, Y. Zhang, A. Feizi, and A. Ozcan, Sci. Rep. 6, 37862 (2016).
34. Y. Zhang, H. Wang, Y. Wu, M. Tamamitsu, and A. Ozcan, Opt. Lett. 42, 3824 (2017).
35. J. R. Fienup, Appl. Opt. 21, 2758 (1982).
36. J. W. Goodman, Introduction to Fourier Optics, 3rd ed. (Roberts & Co., Englewood, CO, 2005).
37. Z. Wang, E. P. Simoncelli, and A. C. Bovik, in The Thirty-Seventh Asilomar Conference on Signals, Systems and Computers (IEEE, Pacific Grove, CA, 2003), pp. 1398–1402.
38. K. Cho, B. van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio, “Learning phrase representations using RNN encoder-decoder for statistical machine translation,” in Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP) (Association for Computational Linguistics, 2014), pp. 1724–1734.
39. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” Adv. Neural Inf. Process. Syst. 27 (2014).
40. X. Mao, Q. Li, H. Xie, R. Y. K. Lau, Z. Wang, and S. P. Smolley, “Least squares generative adversarial networks,” in 2017 IEEE International Conference on Computer Vision (ICCV) (IEEE, 2017), pp. 2813–2821.
41. D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in Proceedings of the 3rd International Conference on Learning Representations (ICLR, 2015).
42. S. S. Kou, L. Waller, G. Barbastathis, and C. J. R. Sheppard, Opt. Lett. 35, 447 (2010).
43. A. Greenbaum, W. Luo, T.-W. Su, Z. Göröcs, L. Xue, S. O. Isikman, A. F. Coskun, O. Mudanyali, and A. Ozcan, Nat. Methods 9, 889 (2012).