We report a dual-modality ghost diffraction (GD) system that simultaneously enables high-fidelity data transmission and high-resolution object reconstruction through complex disordered media using an untrained neural network (UNN) and only one set of realizations. The pixels of a 2D image to be transmitted are sequentially encoded into a series of random amplitude-only patterns by a UNN, without labels or datasets. The generated random patterns are sequentially displayed to interact with an object placed in a designed optical system through complex disordered media. The realizations recorded at the receiving end are used to retrieve the transmitted data and to reconstruct the object at the same time. Experimental results demonstrate that the proposed dual-modality GD system robustly enables high-fidelity data transmission and high-resolution object reconstruction in complex disordered environments. This could be a promising step toward the development of AI-driven compact optical systems with multiple modalities operating through complex disordered media.

Optical modulation through disordered media has become an active research topic1–5 with various applications in biomedicine and astronomy.6,7 The main challenge is that disordered media are inhomogeneous and time-varying and corrupt the effective information along the wave propagation path. Several approaches have been developed to address this challenge,1,4,8–10 e.g., phase conjugation,8 the memory effect,9 and the shower-curtain effect.10 In previous studies, pixelated detector arrays were usually used for intensity detection, but they can be difficult (or even unavailable) to implement in some applications, e.g., at non-visible wavelengths or low light levels.

Recently, ghost diffraction (GD) with structured illumination and a single-pixel detector11–19 has emerged as an easy-to-implement alternative. GD was initially realized with entangled photons generated by spontaneous parametric downconversion in the quantum domain.11,12 Subsequently, GD experiments with pseudo-thermal light promoted its development in the classical domain.13,14 Optical information can be retrieved from the second-order correlation between a series of illumination patterns and the realizations collected by a single-pixel detector.15 Advanced algorithms, e.g., differential,20 normalized,21 and compressed-sensing,22,23 have been developed to enhance the signal-to-noise ratio of ghost images. Furthermore, deep learning24–26 has been applied at low sampling ratios and can perform well. However, training-based deep learning requires a large dataset for optimization and may lack generalization capability. To remove dataset constraints, an untrained neural network (UNN)27 was introduced,28,29 and GD can achieve comparable performance by incorporating a physical model into the neural network. Although GD is promising in real-world scenarios30–36 (e.g., microscopy33 and communication34), current studies focus on a single modality, and it remains difficult to integrate dual or multiple modalities into one optical system, especially in complex disordered environments. It is therefore desirable to explore an integrated GD system that enables multiple modalities in such environments.

Here, we report a dual-modality GD system to simultaneously enable high-fidelity optical data transmission and high-resolution ghost reconstruction through complex disordered media. The UNN is first designed to sequentially encode the pixels of a 2D image (to be optically transmitted) into a series of random amplitude-only patterns, such that the zero-frequency component of each pattern's spectrum is proportional to the corresponding pixel of the 2D image. The series of generated random patterns is sequentially loaded onto a spatial light modulator (SLM) in a designed optical system. The optical wave modulated by the generated random patterns illuminates an object, and a single-pixel detector records a series of light intensities. High-fidelity optical data retrieval can be directly realized from the realizations, and a high-resolution object image is also recovered, enhanced by block-matching and 3D filtering (BM3D) and a UNN regularized by an explicit denoiser (UNN-RED). A series of optical experiments conducted in complex disordered environments verifies the effectiveness and robustness of the designed dual-modality GD system.

In the designed dual-modality GD system, the pixels of a transmitted 2D grayscale image with 128 × 128 pixels are first encoded into a series of random amplitude-only patterns using a UNN. The generation of random amplitude-only patterns is shown in Fig. 1(a). A uniformly distributed random input is fed into a convolutional neural network with a U-net architecture,37 and an amplitude-only pattern P_{m,n} with 128 × 128 pixels is generated as the output. Then, the zero-frequency component of the Fourier spectrum of the output pattern P is extracted and scaled by a magnification factor MF (e.g., 5000) to generate a value H_i described by
H_i = |FT{P_{m,n}}|_{(0,0)} / MF,  (1)
where i = 1, 2, …, K, with K the total number of pixels to be encoded, and FT denotes the Fourier transform. To make H_i equal the original pixel G_i of the 2D image to be transmitted, the mean squared error (MSE) is employed as the loss function, described by
MSE = (H_i − G_i)^2.  (2)
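As a numerical check of the encoding principle, the NumPy sketch below stands in for the UNN optimization: it rescales a random amplitude-only pattern in closed form so that the zero-frequency (DC) component of its Fourier spectrum encodes the target pixel value. Treating MF as a divisor is our reading of "scaled by a magnification factor," and the direct rescaling replaces the network-based optimization of the actual method.

```python
import numpy as np

MF = 5000.0  # magnification factor quoted in the text

def zero_freq_value(pattern, mf=MF):
    # Zero-frequency (DC) component of the 2D Fourier spectrum;
    # for a non-negative real pattern it equals the sum of all pixels.
    return np.abs(np.fft.fft2(pattern)[0, 0]) / mf

# Closed-form stand-in for the UNN: rescale a random 128 x 128 pattern
# so that its DC term encodes the target pixel value g.
rng = np.random.default_rng(0)
g = 0.5                                  # pixel G_i to be transmitted
p = rng.random((128, 128))
p *= g * MF / p.sum()                    # enforce H_i = g
h = zero_freq_value(p)                   # recovered value H_i
mse = (h - g) ** 2                       # Eq. (2)-style per-pixel loss
```

In the actual system the U-net weights are optimized until this MSE vanishes for each pixel; here the rescaling achieves it directly, so h matches g to machine precision.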
FIG. 1.

(a) Generation of random amplitude-only patterns using a UNN, and (b) Gaussian distribution and a differential approach. A zero-mean Gaussian image is obtained by subtracting its mean from a Gaussian map.


After optimization, the zero-frequency component of the Fourier spectrum of pattern P_i is proportional to the pixel G_i. This process is repeated until all pixels of the 2D image are encoded into random amplitude-only patterns.

Since the probability distribution of the random patterns generated by the UNN is inconsistent with that of the light source,38 the transmission quality could be significantly affected. Furthermore, noise is inevitably induced by complex disordered media, preventing a direct application of the generated patterns. To overcome this challenge, a further strategy is designed, as shown in Fig. 1(b). To align the probability distribution of the generated patterns with a Gaussian, a zero-mean Gaussian image is superimposed on each generated pattern. The probability distribution of the pattern is thereby modified without affecting its original zero-frequency component in the Fourier domain. A differential approach is also employed to suppress noise: each pattern P is split into two complementary patterns, (1 + P)/2 and (1 − P)/2. Finally, a shuffle operation is applied to produce randomized illumination patterns.
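The conditioning steps can be sketched in NumPy as follows. The Gaussian noise scale sigma is an assumed illustrative value, and the final shuffle is omitted for brevity; the point of the sketch is the invariant that a zero-mean superposition leaves the pattern's zero-frequency component untouched, and that the differential pair recombines to the modified pattern.

```python
import numpy as np

rng = np.random.default_rng(1)

def condition_pattern(p, sigma=0.05):
    # Superimpose a zero-mean Gaussian image (Fig. 1(b)); because its
    # pixel sum is zero, the DC component of p is unchanged.
    gauss = rng.normal(0.0, sigma, p.shape)
    gauss -= gauss.mean()
    p_mod = p + gauss
    # Differential pair used to suppress noise at the receiver.
    return (1 + p_mod) / 2, (1 - p_mod) / 2

p = rng.random((128, 128))
pa, pb = condition_pattern(p)

dc = lambda x: np.fft.fft2(x)[0, 0].real  # zero-frequency component
```

Note that pa − pb equals p plus the zero-mean Gaussian image, so the encoded DC value survives the conditioning.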

A schematic of the experimental setup for the proposed dual-modality GD system is shown in Fig. 2(a). A green laser (MGI-III-532-50 mW) with a wavelength of 532.0 nm and a peak output power of 50.0 mW is used. The laser beam is expanded by a 40× objective lens and then collimated. The collimated beam is reflected by a mirror and illuminates the generated patterns loaded onto the SLM (Holoeye, LC-R720) with a pixel pitch of 20.0 μm. A 4f system projects the patterns onto an object, e.g., a USAF 1951 resolution target. The lenses L1 and L2 in the 4f system each have a focal length of 50.0 mm. A water tank with dimensions of 100.0 mm (length) × 200.0 mm (width) × 300.0 mm (height) is placed in the optical path and filled with 4000 ml of clean water. To create a dynamic disordered environment, 15 ml of skimmed milk diluted with 1000 ml of clean water is continuously dropped into the water tank. A rotator operating at 600.0 revolutions per minute (rpm) creates dynamic scattering. Only one set of realizations is recorded by a single-pixel silicon photodiode (Thorlabs, PDA100A2).

FIG. 2.

(a) Schematic experimental setup for the proposed dual-modality GD system through complex disordered media. OL: objective lens; SPD: single-pixel detector; L1 and L2: lenses. (b) Schematic of the proposed dual-modality GD system. With one set of collected realizations, the transmitted data can be retrieved using Eq. (7), and the corrected realizations for imaging can be obtained using Eq. (12). For data transmission, a normalization operation is further performed, and the series of normalized data is reshaped into a 2D image. For optical imaging, a coarse image is reconstructed using Eq. (13), followed by a two-step enhancement process.

In dynamic disordered environments,39 temporally varying scaling factors lead to inaccurate data retrieval and object reconstruction. Here, a fixed reference pattern R_{m,n} is applied to correct the series of dynamic scaling factors. The reference pattern is displayed just before each generated illumination pattern. When the reference pattern and the illumination patterns are alternately and sequentially displayed by the SLM, the realizations can be described by
B_i^r = γ_i^r Σ_{m,n} R_{m,n},  (3)
B_i^1 = γ_i^1 Σ_{m,n} [1 + P_{i,m,n}]/2,  (4)
B̃_i^r = γ̃_i^r Σ_{m,n} R_{m,n},  (5)
B_i^2 = γ_i^2 Σ_{m,n} [1 − P_{i,m,n}]/2,  (6)
respectively, where B_i^r and B̃_i^r denote the single-pixel light intensities corresponding to the reference pattern, B_i^1 and B_i^2 denote the single-pixel light intensities corresponding to the illumination patterns (1 + P_i)/2 and (1 − P_i)/2, respectively, and γ_i^r, γ_i^1, γ̃_i^r, and γ_i^2 denote scaling factors. In the developed dual-modality GD system, optical data transmission is described by Eqs. (3)–(6). As shown in Fig. 1(a), each pixel to be transmitted is first encoded. When the optical wave modulated by the generated pattern propagates in free space, each portion of the optical wave carries the information according to the Huygens–Fresnel principle. In the optical channel, the object can be regarded as a disordered medium. Although only a portion of the optical wave propagates to the detector, it still carries enough information. The object can therefore be considered to have no influence on optical data transmission and retrieval.
At the receiving end, data retrieval can be described by
B_i = B_i^1 / B_i^r − B_i^2 / B̃_i^r,  (7)
where B_i denotes the retrieved data. Adjacent scaling factors, e.g., γ_i^r and γ_i^1, can be assumed to be equal owing to the short time interval between them.
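A toy simulation of the reference-frame correction may be helpful. Below, each single-pixel reading is modeled as a drifting scale factor times the total collected light (a deliberate simplification of the optical channel); dividing each illumination reading by the adjacent reference reading cancels the unknown factor, and the differential pair isolates a quantity proportional to the encoded DC value. The specific drift values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

def bucket(pattern, gamma):
    # Single-pixel reading: time-varying scale factor x collected light.
    return gamma * pattern.sum()

n = 128
R = np.full((n, n), 0.5)            # fixed reference pattern
P = rng.random((n, n))              # one conditioned illumination pattern

# Four sequential frames (R, (1+P)/2, R, (1-P)/2); adjacent frames are
# assumed to share the same slowly drifting scale factor, as in the text.
g_ref1 = g_ill1 = 1.30
g_ref2 = g_ill2 = 0.85
B_r  = bucket(R, g_ref1)
B_1  = bucket((1 + P) / 2, g_ill1)
Bt_r = bucket(R, g_ref2)
B_2  = bucket((1 - P) / 2, g_ill2)

# Differential, reference-normalized retrieval: the drifting factors
# cancel, leaving sum(P) / sum(R), i.e., the encoded DC information.
B_i = B_1 / B_r - B_2 / Bt_r
```

Even though the two frame pairs see very different scale factors (1.30 vs 0.85), the retrieved value depends only on the pattern sums, which is what makes retrieval robust in a dynamic medium.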
In the developed dual-modality GD system, when optical imaging is considered, the same set of realizations recorded at the receiving end can be rewritten as
B_i^r = β_i^r Σ_{m,n} R_{m,n} O_{m,n},  (8)
B_i^1 = β_i^1 Σ_{m,n} [1 + P_{i,m,n}] O_{m,n} / 2,  (9)
B̃_i^r = β̃_i^r Σ_{m,n} R_{m,n} O_{m,n},  (10)
B_i^2 = β_i^2 Σ_{m,n} [1 − P_{i,m,n}] O_{m,n} / 2,  (11)
where O_{m,n} denotes the object and β_i^r, β_i^1, β̃_i^r, and β_i^2 denote scaling factors.
Therefore, the corrected realizations can be described by
B̂_i = B_i^1 / B_i^r − B_i^2 / B̃_i^r = Σ_{m,n} P_{i,m,n} O_{m,n} / Σ_{m,n} R_{m,n} O_{m,n}.  (12)
Then, the object can be reconstructed by
Ô_{m,n} = ⟨(B̂_i − ⟨B̂_i⟩)(P_{i,m,n} − ⟨P_{i,m,n}⟩)⟩,  (13)
where Ô_{m,n} denotes the reconstructed ghost image and ⟨·⟩ denotes an ensemble average over the total number of realizations. The ghost image recovered by Eq. (13) suffers from noise. To address this issue, a two-step quality-enhancement process is implemented. BM3D40 is first utilized to suppress noise in the reconstructed ghost image. Then, the filtered ghost image is fed into a designed convolutional neural network (UNN-RED41) to reconstruct a high-quality ghost image. The alternating direction method of multipliers (ADMM)42 is employed to facilitate the optimization of the parameters within the neural network and the explicit denoiser, since direct differentiation of the explicit denoiser would cause backpropagation to fail. The optimization process in UNN-RED can be described by
(14)
(15)
(16)
where U_θ denotes the convolutional neural network with parameters θ, θ* denotes the parameters after optimization, λ and η denote free parameters to be chosen (e.g., 0.5), x is initialized as Ô in Eq. (13) and updated in Eq. (15), u denotes a Lagrange multiplier vector updated in Eq. (16) and initially set to zero, j denotes the iteration index, α denotes the step size for steepest descent (default 1), and f denotes an explicit denoiser function, e.g., non-local means (NLM).43 The NAFNet44 with one block is adopted for U_θ. As can be seen in Eq. (14), a physical model based on ghost imaging is integrated into UNN-RED. After optimization, a high-quality ghost image, i.e., (x − u), is reconstructed. The UNN-RED is implemented on an NVIDIA GeForce 1080 Ti GPU, and the ADAM optimizer45 with a learning rate of 0.001 is used. The two-step enhancement process does not require any datasets or labels. A schematic of the proposed dual-modality GD system is shown in Fig. 2(b). In this study, supervised neural networks would not be very practical, since precise pixel encoding is required and creating comprehensive datasets is difficult. Therefore, a UNN is designed and applied; its dataset-independent characteristic makes it possible to enable both data transmission and object reconstruction.
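The imaging branch can be sketched end to end in NumPy. The fragment below forms a correlation-type coarse ghost image from corrected realizations and then refines it with a plain steepest-descent regularization-by-denoising (RED) iteration; a local box filter stands in for BM3D/NLM, and the network U_θ is replaced by the image variable itself, so this is a simplified stand-in for UNN-RED rather than the authors' implementation. The object size, pattern count, λ, and iteration count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 16, 1200                        # tiny 16 x 16 object, m realizations
x_true = np.zeros(n * n)
x_true[100:140] = 1.0                  # a bright band as a toy object

A = rng.random((m, n * n))             # illumination patterns, one per row
b = A @ x_true                         # corrected single-pixel realizations

# Correlation-type coarse ghost image from the fluctuation terms.
A_c = A - A.mean(axis=0)
b_c = b - b.mean()                     # note: b_c == A_c @ x_true exactly
x0 = A_c.T @ b_c / m

def denoise(x, k=3):
    # Local box filter standing in for BM3D / non-local means.
    img = x.reshape(n, n)
    out = np.empty_like(img)
    r = k // 2
    for i in range(n):
        for j in range(n):
            out[i, j] = img[max(0, i - r):i + r + 1,
                            max(0, j - r):j + r + 1].mean()
    return out.ravel()

# RED refinement: steepest descent on the ghost-imaging data fidelity
# plus the regularization-by-denoising gradient lam * (x - f(x)).
lam = 0.2
alpha = 1.0 / np.linalg.norm(A_c, 2) ** 2
x = denoise(x0 / np.abs(x0).max())     # filtered coarse image as start
for _ in range(150):
    grad = A_c.T @ (A_c @ x - b_c) + lam * (x - denoise(x))
    x = x - alpha * grad
```

On this toy problem the refined image x lands much closer to the ground truth than the normalized coarse correlation image, mirroring the quality gains reported for the two-step enhancement.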

To verify the developed dual-modality GD system, different 2D grayscale images are optically transmitted in Fig. 2(a), each individually encoded into a series of random amplitude-only patterns using the designed UNN. The generated patterns are sequentially displayed by the SLM to illuminate an object (i.e., a USAF 1951 resolution target) through disordered media. Figure 3 shows the experimental results of the proposed dual-modality GD system through static and dynamic disordered media, respectively. In static disordered environments, the experimentally retrieved data are shown in Figs. 3(a)–3(d), and the reconstructed ghost images are shown in Figs. 3(e)–3(h). The quality of the experimentally retrieved images is quantitatively evaluated using the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM).46 As shown in Figs. 3(a)–3(d), the retrieved data have high PSNR and SSIM, demonstrating that the proposed dual-modality GD system enables high-fidelity image transmission.

FIG. 3.

Experimental results: (a)–(d) the retrieved 2D images in static disordered media. PSNR values are 39.27, 40.25, 39.12, and 38.24 dB, and SSIM values are 0.9975, 0.9884, 0.9867, and 0.9884; (e)–(h) the reconstructed objects in static disordered media with CNR of 24.22, 24.83, 28.16, and 25.02, respectively; (i)–(l) the retrieved 2D images in dynamic disordered media. PSNR values are 39.71, 38.62, 39.09, and 40.16 dB, and SSIM values are 0.9973, 0.9810, 0.9899, and 0.9877; (m)–(p) the reconstructed objects in dynamic disordered media with CNR of 23.53, 21.09, 23.68, and 20.67, respectively. The size of the reconstructed object images is 128 × 128 pixels. For CNR calculations, signal part and background part are shown in Fig. S1 (see the supplementary material).


To further illustrate the quality of the retrieved images, the pixels along the 30th row of the images in Figs. 3(a) and 3(b) are plotted in Figs. 4(a) and 4(b), respectively. The experimentally retrieved data overlap with the original data. The PSNR values for Figs. 4(a) and 4(b) are 39.75 and 39.39 dB, respectively, and the MSE values are 1.06 × 10−4 and 1.15 × 10−4. The high PSNR and low MSE demonstrate that the proposed optical system is feasible and robust for optically transmitting 2D grayscale images. The contrast-to-noise ratio (CNR)47–49 is calculated to evaluate the quality of the reconstructed ghost images. In Figs. 3(e)–3(h), the reconstructed ghost images have high CNR, demonstrating that the proposed dual-modality GD system can reconstruct a high-quality object at the same time. Here, element 5 in Group 3 is the finest resolvable feature, corresponding to a high spatial resolution of 78.74 μm. The experimental results in Figs. 3(a)–3(h) demonstrate that the proposed dual-modality GD system can simultaneously realize high-fidelity optical data transmission and high-resolution object reconstruction in static disordered environments.

FIG. 4.

(a)–(d) Comparisons between the pixels along the 30th row of the experimentally retrieved images shown in Figs. 3(a), 3(b), 3(k), and 3(l) and those of original images. Original data refer to a row of the 2D grayscale image to be transmitted and are encoded into a series of random patterns via the UNN.


When optical experiments are conducted in dynamic disordered environments, the optical transmission and imaging results are shown in Figs. 3(i)–3(p). In Figs. 3(i)–3(l), the experimentally retrieved 2D images are of high fidelity. Typical comparisons using the retrieved data in the 30th row of Figs. 3(k) and 3(l) are shown in Figs. 4(c) and 4(d). The PSNR values of the data in Figs. 4(c) and 4(d) are 40.87 and 40.31 dB, respectively, and the MSE values are 8.19 × 10−5 and 9.31 × 10−5. The retrieved data are thus in good accordance with the original data. The reconstructed ghost images are shown in Figs. 3(m)–3(p), and element 5 in Group 3 is well resolved, demonstrating that a high spatial resolution of 78.74 μm is also achieved through dynamic disordered media. Dual modalities, i.e., high-fidelity optical data transmission and high-resolution object reconstruction, are thereby realized in the proposed optical system through complex disordered media.

A 2D image ("butterfly") is encoded into a series of random amplitude-only patterns, which are sequentially used to illuminate an object, and different objects (i.e., "1X," "95," "AF," and "Triple-bar") are individually tested in the optical path. Static (clean) water and dynamic (turbid) water are used in Fig. 2(a). With static water, the experimental results are shown in Figs. 5(a)–5(h). Figures 5(a)–5(d) show that the retrieved images are of high fidelity: PSNR values are higher than 40.0 dB, and SSIM values are close to 1. The comparisons along the 30th row of Figs. 5(a) and 5(b) are shown in Figs. 6(a) and 6(b); PSNR values are 41.00 and 43.45 dB, and MSE values are 7.94 × 10−5 and 4.52 × 10−5, respectively. High-fidelity data transmission is thus always realized when different objects are placed in Fig. 2(a). The reconstructed objects are shown in Figs. 5(e)–5(h); the recovered objects are of high quality, with CNR values higher than 33.0.

FIG. 5.

Experimental results: (a)–(d) the retrieved 2D transmitted images in static disordered media. PSNR values are 41.51, 41.49, 41.21, and 40.19 dB, and SSIM values are 0.9979, 0.9982, 0.9980, and 0.9972; (e)–(h) the reconstructed objects in static disordered media with CNR of 33.22, 40.11, 36.26, and 39.71, respectively; (i)–(l) the retrieved 2D transmitted images in dynamic disordered media. PSNR values are 39.96, 39.91, 38.12, and 38.51 dB, and SSIM values are 0.9976, 0.9978, 0.9970, and 0.9969; (m)–(p) the reconstructed objects in dynamic disordered media with CNR of 30.66, 22.19, 25.45, and 25.82, respectively. For CNR calculations, the signal part and background part are shown in Fig. S1 (see the supplementary material).

FIG. 6.

(a)–(d) Comparisons between the pixels along the 30th row of the experimentally retrieved images shown in Figs. 5(a), 5(b), 5(k), and 5(l) and those of the original image.


In dynamic disordered environments, the experimental results are shown in Figs. 5(i)–5(p). In Figs. 5(i)–5(l), high PSNR and SSIM are achieved. The pixels along the 30th row of the retrieved 2D images in Figs. 5(k) and 5(l) are shown in Figs. 6(c) and 6(d), respectively. The PSNR values of the experimentally retrieved data in Figs. 6(c) and 6(d) are 38.70 and 40.23 dB, and the MSE values are 1.35 × 10−4 and 9.49 × 10−5, respectively. The retrieved data are therefore of high fidelity in dynamic disordered environments. The reconstructed objects, shown in Figs. 5(m)–5(p), render detailed object information with high visibility. It is experimentally verified that the proposed dual-modality GD system can robustly reconstruct a high-quality object and retrieve high-fidelity data simultaneously, using only one set of realizations, in dynamic and complex disordered media.

The proposed optical system is further verified through disordered media at different sampling ratios. Imaging through static (clean) water and dynamic (turbid) water is conducted, and the experimental results are shown in Fig. 7 for sampling ratios of 12.2%, 24.4%, 36.6%, 48.8%, 61.0%, 73.2%, 85.4%, and 97.6%. The comparisons in Fig. 7 show the effectiveness of the proposed two-step enhancement approach. Figures 7(a) and 7(b) show that the reconstruction quality improves markedly as the sampling ratio increases. With the developed two-step enhancement, object images with higher visibility are always obtained. When the sampling ratio is not smaller than 24.4%, the reconstructed ghost images contain clear information. For dynamic and turbid water, the experimental results are shown in Figs. 7(c) and 7(d). With the two-step enhancement, the visibility is significantly enhanced and noise is strongly suppressed, as shown in Fig. 7(d). The proposed optical system can thus recover high-quality objects at low sampling ratios (e.g., 24.4%) in dynamic disordered environments. In the proposed dual-modality GD system, the sampling ratio is constrained by the length of the transmitted data. The experimental results in Fig. 7 demonstrate that high-quality objects can still be reconstructed in complex disordered environments even when the length of the transmitted data is small.

FIG. 7.

Experimental results at different sampling ratios: (a) without the two-step enhancement in static environments, (b) with the two-step enhancement in static environments, (c) without the two-step enhancement in dynamic environments, and (d) with the two-step enhancement in dynamic environments.


In Figs. 8(a)–8(d), the CNR is calculated to quantitatively evaluate the quality of the reconstructed objects at different sampling ratios. In Figs. 8(a) and 8(b), the CNR values of the reconstructed object images show similar trends, steadily increasing with the sampling ratio. The CNR values of the objects reconstructed with BM3D and UNN-RED, shown in Fig. 8(b), are much higher than those without enhancement in Fig. 8(a). When optical experiments are conducted in dynamic turbid water, the CNR trends are similar, as shown in Figs. 8(c) and 8(d). The average CNR values of the reconstructed object images without enhancement range from 0.34 to 1.56, whereas with the two-step enhancement they increase from 5.27 to 24.17.

FIG. 8.

Variation of CNR values at different sampling ratios: (a) without the two-step enhancement in static disordered environments, (b) with the two-step enhancement in static disordered environments, (c) without the two-step enhancement in dynamic disordered environments, and (d) with the two-step enhancement in dynamic disordered environments.


We have reported a dual-modality GD system using a UNN that simultaneously enables high-fidelity data transmission and high-resolution object reconstruction through complex disordered media with only one set of realizations. A series of random amplitude-only patterns is generated by the UNN to carry the information of a 2D grayscale image to be optically transmitted. The generated random patterns are loaded onto the SLM to modulate the optical wave, and a series of single-pixel light intensities is recorded at the receiving end. With only one set of realizations, high-fidelity data can be retrieved and a high-resolution, high-visibility object can be recovered. A series of optical experiments has verified the proposed dual-modality GD system, demonstrating that dual-modality GD can be realized in complex disordered environments. The proposed approach could open an avenue for the development of AI-driven multi-modality optical systems in complex disordered environments.

Additional information supporting the findings of this work is provided as a separate file. The supplementary material includes information about the contrast-to-noise ratio (CNR).

This work was supported by the Hong Kong Research Grants Council (Grant Nos. 15224921, 15223522), the Basic and Applied Basic Research Foundation of GuangDong Province (Grant No. 2022A1515011858), and the Hong Kong Polytechnic University (Grant No. 1-WZ4M).

The authors have no conflicts to disclose.

Yang Peng: Data curation (equal); Formal analysis (equal); Investigation (equal); Methodology (equal); Software (equal); Visualization (equal); Writing – original draft (equal); Writing – review & editing (equal). Wen Chen: Conceptualization (lead); Formal analysis (equal); Funding acquisition (lead); Investigation (equal); Methodology (equal); Project administration (lead); Resources (equal); Supervision (lead); Writing – review & editing (equal).

The data and source codes that support the findings of this study are openly available in GitHub at https://github.com/YangPeng2021/Dual-modality-ghost-diffraction.

1. S. Yoon, M. Kim, M. Jang, Y. Choi, W. Choi, S. Kang, and W. Choi, "Deep optical imaging within complex scattering media," Nat. Rev. Phys. 2, 141–158 (2020).
2. L. Lin, J. Cao, D. Zhou, H. Cui, and Q. Hao, "Ghost imaging through scattering medium by utilizing scattered light," Opt. Express 30(7), 11243–11253 (2022).
3. M. Bashkansky, S. D. Park, and J. Reintjes, "Single pixel structured imaging through fog," Appl. Opt. 60(16), 4793–4797 (2021).
4. F. Helmchen and W. Denk, "Deep tissue two-photon microscopy," Nat. Methods 2(12), 932–940 (2005).
5. D. J. Cuccia, F. P. Bevilacqua, A. J. Durkin, F. R. Ayers, and B. J. Tromberg, "Quantitation and mapping of tissue optical properties using modulated imaging," J. Biomed. Opt. 14(2), 024012 (2009).
6. B. I. Erkmen, "Computational ghost imaging for remote sensing," J. Opt. Soc. Am. A 29(5), 782–789 (2012).
7. C. Balas, "Review of biomedical optical imaging—A powerful, non-invasive, non-ionizing technology for improving in vivo diagnosis," Meas. Sci. Technol. 20(10), 104020 (2009).
8. Z. Yaqoob, D. Psaltis, M. S. Feld, and C. Yang, "Optical phase conjugation for turbidity suppression in biological samples," Nat. Photonics 2, 110–115 (2008).
9. J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, "Non-invasive imaging through opaque scattering layers," Nature 491, 232–234 (2012).
10. E. Edrei and G. Scarcelli, "Optical imaging through dynamic turbid media using the Fourier-domain shower-curtain effect," Optica 3(1), 71–74 (2016).
11. T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, "Optical imaging by means of two-photon quantum entanglement," Phys. Rev. A 52(5), 3429–3432 (1995).
12. D. V. Strekalov, A. V. Sergienko, D. N. Klyshko, and Y. H. Shih, "Observation of two-photon 'ghost' interference and diffraction," Phys. Rev. Lett. 74(18), 3600–3603 (1995).
13. A. Valencia, G. Scarcelli, M. D'Angelo, and Y. H. Shih, "Two-photon imaging with thermal light," Phys. Rev. Lett. 94(6), 063601 (2005).
14. B. I. Erkmen and J. H. Shapiro, "Ghost imaging: From quantum to classical to computational," Adv. Opt. Photonics 2(4), 405–450 (2010).
15. J. H. Shapiro, "Computational ghost imaging," Phys. Rev. A 78(6), 061802 (2008).
16. P. Ryczkowski, M. Barbier, A. T. Friberg, J. M. Dudley, and G. Genty, "Ghost imaging in the time domain," Nat. Photonics 10, 167–170 (2016).
17. A. M. Paniagua-Diaz, I. Starshynov, N. Fayard, A. Goetschy, R. Pierrat, R. Carminati, and J. Bertolotti, "Blind ghost imaging," Optica 6(4), 460–464 (2019).
18. Y. Peng, Y. Xiao, and W. Chen, "High-fidelity and high-robustness free-space ghost transmission in complex media with coherent light source using physics-driven untrained neural network," Opt. Express 31(19), 30735–30749 (2023).
19. V. Durán, F. Soldevila, E. Irles, P. Clemente, E. Tajahuerce, P. Andrés, and J. Lancis, "Compressive imaging in scattering media," Opt. Express 23(11), 14424–14433 (2015).
20. F. Ferri, D. Magatti, L. A. Lugiato, and A. Gatti, "Differential ghost imaging," Phys. Rev. Lett. 104(25), 253603 (2010).
21. B. Sun, S. S. Welsh, M. P. Edgar, J. H. Shapiro, and M. J. Padgett, "Normalized ghost imaging," Opt. Express 20(15), 16892–16901 (2012).
22. M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, "Single-pixel imaging via compressive sampling," IEEE Signal Process. Mag. 25(2), 83–91 (2008).
23. O. Katz, Y. Bromberg, and Y. Silberberg, "Compressive ghost imaging," Appl. Phys. Lett. 95(13), 131110 (2009).
24. F. Wang, C. Wang, C. Deng, S. Han, and G. Situ, "Single-pixel imaging using physics enhanced deep learning," Photonics Res. 10(1), 104–110 (2022).
25. N. Radwell, S. D. Johnson, M. P. Edgar, C. F. Higham, R. Murray-Smith, and M. J. Padgett, "Deep learning optimized single-pixel LiDAR," Appl. Phys. Lett. 115(23), 231101 (2019).
26. S. Jiao, J. Feng, Y. Gao, T. Lei, Z. Xie, and X. Yuan, "Optical machine learning with incoherent light and a single-pixel detector," Opt. Lett. 44(21), 5186–5189 (2019).
27. D. Ulyanov, A. Vedaldi, and V. Lempitsky, "Deep image prior," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2018), pp. 9446–9454.
28. S. Liu, X. Meng, Y. Yin, H. Wu, and W. Jiang, "Computational ghost imaging based on an untrained neural network," Opt. Lasers Eng. 147, 106744 (2021).
29. F. Wang, C. Wang, M. Chen, W. Gong, Y. Zhang, S. Han, and G. Situ, "Far-field super-resolution ghost imaging with a deep neural network constraint," Light: Sci. Appl. 11, 1 (2022).
30. A. Zhang, Y. He, L. Wu, L. Chen, and B. Wang, "Tabletop x-ray ghost imaging with ultra-low radiation," Optica 5(4), 374–377 (2018).
31. L. Olivieri, J. S. T. Gongora, L. Peters, V. Cecconi, A. Cutrona, J. Tunesi, R. Tucker, A. Pasquazi, and M. Peccianti, "Hyperspectral terahertz microscopy via nonlinear ghost imaging," Optica 7(2), 186–191 (2020).
32. F. Rousset, N. Ducros, F. Peyrin, G. Valentini, C. D'Andrea, and A. Farina, "Time-resolved multispectral imaging based on an adaptive single-pixel camera," Opt. Express 26(8
),
10550
10558
(
2018
).
33.
N.
Radwell
,
K. J.
Mitchell
,
G. M.
Gibson
,
M. P.
Edgar
,
R.
Bowman
, and
M. J.
Padgett
, “
Single-pixel infrared and visible microscope
,”
Optica
1
(
5
),
285
289
(
2014
).
34.
Y.
Cao
,
Y.
Xiao
, and
W.
Chen
, “
Optical analog-signal transmission system in a dynamic and complex scattering environment using binary encoding with a modified differential method
,”
Opt. Express
31
(
10
),
16882
16896
(
2023
).
35.
H.
Liu
and
W.
Chen
, “
Optical ghost cryptography and steganography
,”
Opt. Lasers Eng.
130
,
106094
(
2020
).
36.
S.
Yuan
,
J.
Yao
,
X.
Liu
,
X.
Zhou
, and
Z.
Li
, “
Cryptanalysis and security enhancement of optical cryptography based on computational ghost imaging
,”
Opt. Commun.
365
,
180
185
(
2016
).
37.
O.
Ronneberger
,
P.
Fischer
, and
T.
Brox
, “
U-Net: Convolutional networks for biomedical image segmentation
,” in
18th International Conference on Medical Image Computing and Computer-Assisted Intervention
(
Springer
,
2015
), pp.
234
241
.
38.
B. I.
Erkmen
and
J. H.
Shapiro
, “
Unified theory of ghost imaging with Gaussian-state light
,”
Phys. Rev. A
77
(
4
),
043809
(
2008
).
39.
Y.
Xiao
,
L.
Zhou
, and
W.
Chen
, “
High-resolution ghost imaging through complex scattering media via a temporal correction
,”
Opt. Lett.
47
(
15
),
3692
3695
(
2022
).
40.
K.
Dabov
,
A.
Foi
,
V.
Katkovnik
, and
K.
Egiazarian
, “
Image denoising by sparse 3-D transform-domain collaborative filtering
,”
IEEE Trans. Image Process.
16
(
8
),
2080
2095
(
2007
).
41.
G.
Mataev
,
P.
Milanfar
, and
M.
Elad
, “
DeepRED: Deep image prior powered by RED
,” in
Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)
(
IEEE
,
2019
).
42.
S.
Boyd
,
N.
Parikh
,
E.
Chu
,
B.
Peleato
, and
J.
Eckstein
, “
Distributed optimization and statistical learning via the alternating direction method of multipliers
,”
Found. Trends Mach. Learn.
3
(
1
),
1
122
(
2011
).
43.
A.
Buades
,
B.
Coll
, and
J. M.
Morel
, “
A non-local algorithm for image denoising
,” in
IEEE Computer Society Conference on Computer Vision and Pattern Recognition
(
IEEE
,
2005
), Vol.
2
, pp.
60
65
.
44.
L.
Chen
,
X.
Chu
,
X.
Zhang
, and
J.
Sun
, “
Simple baselines for image restoration
,” in
European Conference on Computer Vision
(
Springer
,
2022
), Vol.
13667
, pp.
17
33
.
45.
D. P.
Kingma
and
J.
Ba
, “
Adam: A method for stochastic optimization
,” in
The 3rd International Conference on Learning Representations
,
2015
.
46.
Z.
Wang
,
A. C.
Bovik
,
H. R.
Sheikh
, and
E. P.
Simoncelli
, “
Image quality assessment: From error visibility to structural similarity
,”
IEEE Trans. Image Process.
13
(
4
),
600
612
(
2004
).
47.
B.
Redding
,
M. A.
Choma
, and
H.
Cao
, “
Speckle-free laser imaging using random laser illumination
,”
Nat. Photonics
6
,
355
359
(
2012
).
48.
Y.
Hao
and
W.
Chen
, “
A dual-modality optical system for single-pixel imaging and transmission through scattering media
,”
Opt. Lett.
49
,
371
374
(
2024
).
49.
Q.
Song
,
Q. H.
Liu
, and
W.
Chen
, “
High-resolution ghost imaging through dynamic and complex scattering media with adaptive moving average correction
,”
Appl. Phys. Lett.
124
,
211104
(
2024
).