A wavelength scanning Ptychographic Iterative Engine (ws-PIE) is proposed to reconstruct high-quality complex images of specimens. Compared with common ptychography, which requires the user to transversely scan the sample during data acquisition, the ws-PIE fundamentally reduces the data acquisition time and avoids the heavy dependence on the accuracy of the scanning mechanism. This method can be easily applied in the fields of material and biological science, as wavelength-swept laser sources are commercially available. The feasibility of the ws-PIE is demonstrated numerically and experimentally.

## I. INTRODUCTION

Microscopy is important for identifying the detailed structures of specimens of interest, ranging from inorganic materials such as polymers to organic structures such as cells and tissues. To suppress the influence of defocused images, most observed samples are cut into very thin slices. However, the intensity image of a very thin sample regularly suffers from poor contrast due to its high transmission efficiency, which is why various staining materials have been developed and applied to enhance the visibility of the acquired intensity image. Conversely, the phase image, which indicates the phase delay of light transmitted through the sample and is independent of the absorption, can achieve much better contrast, making phase detection an important tool in the fields of material and biological science.^{1–5} Phase contrast imaging can be achieved via a number of methods such as interferometry,^{6} crystal analysis,^{7} Transport of Intensity Equation (TIE) solution,^{8,9} or lensless imaging.^{10}

Coherent diffractive imaging (CDI) is a lensless imaging technique that reconstructs the complex field from the recorded intensity via iterative approaches. Due to the simplicity of the experimental setup, the low requirements on the working environment, and the independence of the image resolution from the quality of the optics used, CDI has a broad range of applications in many research fields.^{11,12} However, traditional CDI algorithms suffer from low convergence speed, computation stagnation, and a limited field of view, and for samples with complicated structures, traditional CDI cannot generate reliable images. To overcome these bottlenecks, the Ptychographic Iterative Engine (PIE) and extended PIE (ePIE) algorithms were proposed.^{13,14} During the data acquisition process, the PIE scans the sample through a localized illumination probe over a grid of positions while recording the diffraction pattern intensity in the far field. When there is a proper overlap ratio between two adjacent illuminated areas, both the illuminating probe and the complex transmission of the object can be faithfully reconstructed by applying the two counterpart updating formulas iteratively. Besides its fast convergence, strong immunity to coherent noise, and high reliability in imaging samples with complicated structures, the PIE also offers an extended field of view and can decouple the transmission function of the sample from the illumination, which cannot be realized with other imaging techniques. Because of these outstanding advantages, the PIE has attracted a great deal of attention in various research fields and is widely studied worldwide.^{15–18} While having these outstanding advantages, the PIE also suffers from several clear shortcomings.
The first problem is the time-consuming data acquisition process: for a typical mechanical scanning-based PIE experiment, recording 10 × 10 diffraction pattern frames always takes about 10 min, which imposes a fairly high requirement on the stability of the imaging system and working environment. The second problem is its heavy dependence on the accuracy of the scanning mechanism. Because of the backlash error or hysteresis of the translation stage, the positions of the sample cannot be exactly known during data acquisition, which makes the resolution of the final reconstruction much lower than that determined by the numerical aperture of the imaging system. Though the annealing method^{19} and the cross-correlation searching method^{20} have been proposed to correct the scanning positions, they only work well when the positioning error is small. Another innovation related to the PIE is that the scanning process can be carried out in the frequency domain. Using this concept, Fourier Ptychographic Microscopy (FPM) has been developed recently.^{21,22} Despite the advantages of a large field of view and a high space-bandwidth product, FPM is limited by the assumption of fully coherent illumination, which is not practical for an LED array. Moreover, the FPM algorithm assumes that the imaging system has the same optical transfer function for all incident angles, so an inaccurate determination of the phase slope of the LED light can degrade the final resolution.

In this paper, a wavelength scanning PIE (ws-PIE) phase retrieval method is proposed to overcome the shortcomings of the common PIE in its time-consuming data acquisition and positioning error. In the proposed method, laser beams of a series of wavelengths are individually incident on the sample under study while the diffraction patterns formed by the exiting light are recorded. The complex transmission function of the sample and those of the illuminating beams can all be reconstructed faithfully and quickly. As the updating formulas adopted in the iterative computation are equivalent to those of the standard ePIE algorithm, this proposed method can, like ePIE, decouple the object function from the illumination, whereas traditional CDI techniques can only reconstruct the exiting field (the product of the illumination function and the object function). Therefore, this proposed method is essentially a modified PIE algorithm, though its setup looks quite different from that of common PIE imaging at first glance. As the switching of the wavelength and the exposure of the detector are realized automatically, the whole data acquisition process can be completed in approximately 10 s, distinctly increasing the data acquisition speed and thus remarkably relaxing the requirements on the stability of the imaging system and working environment. At the same time, as there is no scanning of the illuminating beam relative to the sample, the complicated, time-consuming computation of position correction is no longer required, which also increases the reconstruction speed remarkably. In other words, the two main shortcomings of the common PIE algorithm can be easily overcome via the proposed wavelength scanning method. Furthermore, the critical advantage of the ws-PIE over FPM is its simple scheme, which means fewer error sources and more robust stability.
Although FPM can dramatically enhance the NA of an existing imaging system, it has certain limitations, especially for large incident angles of the LED light. The concept of changing the wavelength, rather than scanning in the spatial or frequency domain, is simple to implement, and the simple setup offers more opportunities to combine this technique with existing modalities.

A multi-wavelength illumination (or wavelength-swept laser source) is commonly used to image the 3D inner structure of a sample, i.e., the distributions of absorption, reflectance, and refractive index, with the optical coherence tomography (OCT) technique, digital holography, and diffraction phase microscopy.^{23–27} Here, the wide spectrum generates a narrow coherence gate to slice the sample into many layers and image them separately, or the height map is extracted from the measured phase map with Δφ(*x*, *y*) = Δ*n*·(2*π*/*λ*)·Δ*h*(*x*, *y*), where *λ* is the wavelength of the light source and Δ*n* is the refractive index difference between the substrate and the investigated sample.^{27} However, in all these techniques, the phase measurement is realized with classical interferometry, and the measurement accuracy heavily depends on the regularity of the wavefront of the reference beam, which is used to generate interference fringes in the temporal or spatial domain. The possibility of using multiple wavelengths to realize traditional CDI has been explored previously to avoid the axial movement of the detector.^{28–30} However, it is difficult to decouple the object function from the illumination function without additional measurements in conventional CDI, and all illuminating beams of different wavelengths are assumed to be ideally planar, but for practical wavelength scanning laser sources such as optical parametric oscillators (OPOs) and other tunable laser sources, this assumption is clearly unreasonable.^{31,32} Furthermore, the quality of the reconstruction is not ideal.
Compared to interferometry and traditional CDI, the PIE can retrieve the pure transmission function of the object by decoupling the illuminating distribution from the exiting complex field; thus, it is more flexible for applications and has been successfully adopted to realize 3D imaging of a sample by retrieving its transmitted complex field.^{33,34} In this paper, we demonstrate that the performance of the PIE can be remarkably improved in both data acquisition and reconstruction by using a wavelength scanning laser source.

## II. BASIC PRINCIPLE

The setup of the ws-PIE is schematically shown in Figure 1, where a roughly planar laser beam of wavelength $\lambda_m$ ($m = 1, \ldots, M$) incident on a pinhole *H*(**r**) creates a diffracted wavefront *P*(**r**) at the sample location, where **r** = (*x*, *y*) denotes the real-space coordinates. A charge coupled device (CCD) located downstream of the specimen records a set of diffraction patterns *I*(**u**, *λ*_{m}), which form the inputs to the proposed algorithm, where **u** = (*u*, *v*) is the reciprocal-space vector. The exiting wave of the specimen $\psi(\mathbf{r},\lambda_m)$ is related to the measured Fraunhofer diffraction patterns by Equation (1);^{35} *z*_{1} and *z*_{2} are illustrated in Figure 1,

$$I(\mathbf{u},\lambda_m)=\left|\int \psi(\mathbf{r},\lambda_m)\exp\!\left(-\frac{i2\pi\,\mathbf{u}\cdot\mathbf{r}}{\lambda_m z_2}\right)\mathrm{d}\mathbf{r}\right|^{2}. \tag{1}$$

It has been demonstrated that when the radiation source is monochromatic with wavelength *λ*_{0}, $\psi(\mathbf{r},\lambda_0)$ can be reconstructed from a sequence of diffraction patterns recorded at different planes located at *z*_{m} along the optical axis.^{36,37} At first glance, as the wavelength and the distance appear in the form *λ* × *z*_{2} in Equation (1), a change in the propagation distance *z* has the same effect as a change in the wavelength *λ*, i.e., the same diffraction patterns can be acquired at a fixed distance *z*_{0} with a set of illumination wavelengths *λ*_{m} = *λ*_{0}*z*_{m}/*z*_{0}. This is the basic principle of multi-wavelength CDI. However, as the exiting field $\psi(\mathbf{r},\lambda_m)$ is the product of the object function *O*(**r**, *λ*_{m}) and the illumination function *P*(**r**, *λ*_{m}), which is the diffraction of the pinhole *H*(**r**) and always changes with wavelength, it cannot be exactly known in traditional CDI experiments. Thus, without decoupling *P*(**r**, *λ*_{m}) from the exiting field $\psi(\mathbf{r},\lambda_m)$, the object function *O*(**r**, *λ*_{m}) cannot be measured accurately with traditional CDI using the wavelength scanning scheme. With the PIE algorithms,^{38} the transmission function of the object *O*(**r**, *λ*_{m}) can be decoupled from the illumination function *P*(**r**, *λ*_{m}) iteratively with the updating formula

$$O'(\mathbf{r},\lambda)=O(\mathbf{r},\lambda)+\frac{|P(\mathbf{r},\lambda)|}{|P(\mathbf{r},\lambda)|_{\max}}\,\frac{P^{*}(\mathbf{r},\lambda)}{|P(\mathbf{r},\lambda)|^{2}+\alpha}\,\beta\left[\psi'(\mathbf{r},\lambda)-\psi(\mathbf{r},\lambda)\right], \tag{2}$$

where *P**(**r**, *λ*) is the conjugate of *P*(**r**, *λ*), *α* is used to avoid division by zero, and *β* controls the amount of feedback. For the simulations and experiments in this paper, we choose *α* = 1 and *β* = 1. $\psi(\mathbf{r},\lambda)$ denotes the exit surface wave (ESW) in the forward computation, and $\psi'(\mathbf{r},\lambda)$ is the updated ESW obtained after applying the diffraction intensity constraint $I(\mathbf{u},\lambda)$. Equation (2) is essentially a steepest descent algorithm^{39} with a spatially variant step size of $|P(\mathbf{r},\lambda)|/\max_{\mathbf{r}}|P(\mathbf{r},\lambda)|$. When the illumination *P*(**r**, *λ*) changes with wavelength, the step size changes accordingly, thus increasing the possibility of escaping local minima. This is similar to the PIE, where the varying step is realized by shifting the probe transversely relative to the sample.
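The Wiener-weighted update of Equation (2) can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code; the function and array names are ours, and the same form serves the counterpart probe update:

```python
import numpy as np

def update_object_probe(obj, probe, psi, psi_prime, alpha=1.0, beta=1.0):
    """One ws-PIE update of the object and probe at a single wavelength.

    psi is the forward exit surface wave O*P, psi_prime the ESW after the
    diffraction-intensity constraint; alpha avoids division by zero and
    beta controls the feedback, as in the text.
    """
    diff = beta * (psi_prime - psi)
    # Wiener-type weighting: |P|/|P|_max * conj(P) / (|P|^2 + alpha)
    w_obj = np.abs(probe) / np.abs(probe).max() * np.conj(probe) / (np.abs(probe) ** 2 + alpha)
    w_prb = np.abs(obj) / np.abs(obj).max() * np.conj(obj) / (np.abs(obj) ** 2 + alpha)
    return obj + w_obj * diff, probe + w_prb * diff
```

When the constrained ESW equals the forward ESW (`psi_prime == psi`), the update leaves both estimates unchanged, which is the fixed point of the iteration.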

The distribution of *P*(**r**, *λ*_{m}) is the diffraction of the pinhole *H*(**r**), whose shape can be exactly measured prior to the experiment; therefore, *P*(**r**, *λ*_{m}) can be accurately reconstructed using the Fienup error reduction (ER) algorithm in the iterative computation by propagating it to the pinhole plane and using the shape of the pinhole as the spatial constraint.

In the experiments, after all the diffraction patterns have been recorded while scanning the wavelength from *λ*_{1} to *λ*_{M}, the reconstruction process starts with the initial guesses *O*_{1}(**r**, *λ*_{1}) and *P*_{1}(**r**, *λ*_{1}) for the object transmission function and illumination probe, respectively. A zero matrix is used as the initial guess of both the object and the probe for the simulations and experiments in this paper. The iterative computation is carried out for the *n*th iteration and the *m*th illumination wavelength $\lambda_m$ with the following steps:

1. Propagate the ESW of the object, $\psi_n(\mathbf{r},\lambda_m)=O_n(\mathbf{r},\lambda_m)P_n(\mathbf{r},\lambda_m)$, to the detector plane, $\Psi_n(\mathbf{u},\lambda_m)=F\{\psi_n(\mathbf{r},\lambda_m)\}$, where $F$ denotes the Fourier transform between the two concerned planes.

2. Replace the modulus of $\Psi_n(\mathbf{u},\lambda_m)$ with the square root of $I(\mathbf{u},\lambda_m)$,
$$\Psi'_n(\mathbf{u},\lambda_m)=\sqrt{I(\mathbf{u},\lambda_m)}\,\frac{F\{\psi_n(\mathbf{r},\lambda_m)\}}{\left|F\{\psi_n(\mathbf{r},\lambda_m)\}\right|},$$
and propagate it back to the object plane to obtain an improved ESW, $\psi'_n(\mathbf{r},\lambda_m)=F^{-1}\{\Psi'_n(\mathbf{u},\lambda_m)\}$, where $F^{-1}$ denotes the inverse Fourier transform.

3. Update the transmission function of the object and the illumination probe as
$$O'_n(\mathbf{r},\lambda_m)=O_n(\mathbf{r},\lambda_m)+\frac{|P_n(\mathbf{r},\lambda_m)|}{|P_n(\mathbf{r},\lambda_m)|_{\max}}\,\frac{P_n^{*}(\mathbf{r},\lambda_m)}{|P_n(\mathbf{r},\lambda_m)|^{2}+\alpha}\,\beta\left[\psi'_n(\mathbf{r},\lambda_m)-\psi_n(\mathbf{r},\lambda_m)\right],$$
$$P'_n(\mathbf{r},\lambda_m)=P_n(\mathbf{r},\lambda_m)+\frac{|O_n(\mathbf{r},\lambda_m)|}{|O_n(\mathbf{r},\lambda_m)|_{\max}}\,\frac{O_n^{*}(\mathbf{r},\lambda_m)}{|O_n(\mathbf{r},\lambda_m)|^{2}+\alpha}\,\beta\left[\psi'_n(\mathbf{r},\lambda_m)-\psi_n(\mathbf{r},\lambda_m)\right], \tag{3}$$
where $\alpha$ and $\beta$ have been defined in the paragraph under Equation (2).

4. Propagate $P'_n(\mathbf{r},\lambda_m)$ back to the pinhole plane, enforce it to conform to the measured shape of the pinhole $H(\mathbf{r})$ using the ER algorithm, and propagate it forward again to the sample plane to obtain the improved illumination,
$$P_{n+1}(\mathbf{r},\lambda_m)=F\left\{F^{-1}\{P'_n(\mathbf{r},\lambda_m)\}\times H(\mathbf{r})\right\}. \tag{4}$$

5. Deduce the object function of the $(m+1)$th wavelength with Equation (5) and jump to step 1, repeating the above computation until each $I(\mathbf{u},\lambda_m)$ has been addressed, which completes a single round of wavelength scanning,
$$O_n(\mathbf{r},\lambda_{m+1})=|O'_n(\mathbf{r},\lambda_m)|\exp\left\{i\arg[O'_n(\mathbf{r},\lambda_m)]\frac{\lambda_m}{\lambda_{m+1}}\right\}. \tag{5}$$

6. To test the accuracy of the reconstruction, the normalized error in the detector plane is used. For the $m$th wavelength and $n$th iteration, it is calculated as
$$E_F=\frac{\sum_{\mathbf{u}}\left|\sqrt{I(\mathbf{u},\lambda_m)}-\left|\Psi_n(\mathbf{u},\lambda_m)\right|\right|^{2}}{\sum_{\mathbf{u}}I(\mathbf{u},\lambda_m)}. \tag{6}$$

The iteration process follows a snake-like pattern: the first iteration consists of one complete set of sequential propagations over the M wavelengths in the forward direction (i.e., from the first wavelength to the second, and so on, until the Mth wavelength); the second iteration calculates the wavefront from the Mth wavelength to the (M − 1)th, and so on, until the first wavelength; once the first wavelength is reached, the third iteration proceeds in the forward direction again.
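The full cycle, with the snake-like wavelength ordering, the modulus constraint, the ER support constraint on the probe, the phase rescaling between wavelengths, and the normalized error of Equation (6), can be sketched as follows. This is a simplified NumPy illustration under our own assumptions: plain Fourier-transform pairs stand in for the true near-field propagators, and all names are ours:

```python
import numpy as np

def ws_pie(patterns, wavelengths, probe0, obj0, pinhole, n_iter=50, alpha=1.0, beta=1.0):
    """Minimal ws-PIE reconstruction loop (an illustrative sketch).

    patterns    : measured intensities I(u, lam), shape (M, N, N)
    wavelengths : the M scanned wavelengths
    probe0/obj0 : initial complex guesses for illumination and object
    pinhole     : binary support H(r) for the ER probe constraint
    """
    F, Fi = np.fft.fft2, np.fft.ifft2
    obj, probe = obj0.astype(complex), probe0.astype(complex)
    M = len(wavelengths)
    errors = []
    for n in range(n_iter):
        # snake-like ordering: forward pass on even iterations, backward on odd
        order = list(range(M)) if n % 2 == 0 else list(range(M - 1, -1, -1))
        for k, m in enumerate(order):
            psi = obj * probe                               # exit surface wave
            Psi = F(psi)
            amp = np.sqrt(patterns[m])
            errors.append(np.sum((amp - np.abs(Psi)) ** 2) / np.sum(patterns[m]))  # Eq. (6)
            psi_p = Fi(amp * np.exp(1j * np.angle(Psi)))    # modulus constraint
            diff = beta * (psi_p - psi)
            o_old = obj
            obj = obj + (np.abs(probe) / np.abs(probe).max()) \
                * np.conj(probe) / (np.abs(probe) ** 2 + alpha) * diff   # Eq. (3)
            probe = probe + (np.abs(o_old) / np.abs(o_old).max()) \
                * np.conj(o_old) / (np.abs(o_old) ** 2 + alpha) * diff
            probe = F(Fi(probe) * pinhole)                  # ER support constraint, Eq. (4)
            if k + 1 < M:                                   # phase rescaling, Eq. (5)
                obj = np.abs(obj) * np.exp(
                    1j * np.angle(obj) * wavelengths[m] / wavelengths[order[k + 1]])
    return obj, probe, errors
```

With data that are exactly consistent with the forward model and a correct initial guess, every step satisfies the modulus constraint and the error of Equation (6) stays at the numerical floor, which is a useful sanity check for any implementation.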

In the iterative computation described above, the absorption of the sample is assumed to be identical for all wavelengths used, and this assumption is reasonably valid when the wavelength scanning range is only tens of nanometers.^{40} Theoretically, the phase retardation for two different wavelengths should be expressed as $\Delta\phi(\lambda_{m+1})=\Delta\phi(\lambda_m)\frac{\lambda_m}{\lambda_{m+1}}\left(1+\frac{\Delta n}{n_{\lambda_m}}\right)$, where $\Delta n=n_{\lambda_{m+1}}-n_{\lambda_m}$, $n_{\lambda_m}$ is the refractive index (RI) of the specimen at $\lambda_m$, and $\Delta n$ is the variation due to chromatic dispersion. As the dispersion of most samples is difficult to know exactly, the conversion in Equation (5) only works when $\Delta n$ is very small, that is, when the wavelength scanning range is not too large. Take a wet mesophyll cell wall as an example: its refractive index at 495 nm is 1.416,^{41–43} and $\Delta n$ is around 0.001 over the 495–815 nm wavelength range; therefore, $\Delta n/n_{\lambda_m}$ is about 7 × 10^{−4}. If the thickness of the sample is assumed to be 4.95 *μ*m (10 wavelengths in thickness), the phase change related to the dispersion of the sample is about 3 × 10^{−3} rad, which is still much smaller than the sensitivity of the PIE. Therefore, for our experiment, which is carried out with wavelengths ranging from 700 to 790 nm, the phase error induced by neglecting the dispersion of the sample is less than 0.001 rad, and thus Equation (5) works well. The iteration is given as a flow chart in Figure 2.
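The smallness of the dispersion term can be checked numerically. The refractive-index figures below are those quoted in the text; the linear-dispersion interpolation over the scan range is our own simplifying assumption:

```python
# Rough numeric check of the dispersion argument: how large is the relative
# index variation Delta_n / n that Equation (5) neglects?
n_495 = 1.416                       # RI of a wet mesophyll cell wall at 495 nm
dn_per_nm = 0.001 / (815 - 495)     # ~0.001 total RI change over 495-815 nm (assumed linear)

rel_full = 0.001 / n_495            # relative variation over the full 495-815 nm range
dn_scan = dn_per_nm * (790 - 700)   # RI change over the experimental 700-790 nm scan
rel_scan = dn_scan / n_495          # relative variation over the scan range
```

The full-range figure reproduces the ~7 × 10^{−4} quoted above, and restricting to the 90-nm experimental scan shrinks it by roughly another factor of three, consistent with the claim that the Equation (5) conversion error is negligible there.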

## III. NUMERICAL SIMULATION

A series of simulations were carried out to demonstrate the feasibility of the proposed wavelength scanning method. The parameters chosen for simulations coincide with the real experiment, that is, the wavelength ranges from 700 to 790 nm. Figures 3(a) and 3(b) present the intensity and phase of the transmission function of the object *O*(**r**, *λ*), simulated for a wavelength of 700 nm, where the water drop texture represents the modulus (varying from 0 to 255) and the stem cross section of the sunflower epidermis represents the phase (varying from 0 to 4*π* for 700 nm).

The illuminating field on the sample *P*(**r**, *λ*) is generated by computing the Fresnel integral for a plane wave incident on a circular pinhole aperture *H*(**r**) (1 mm in diameter) with a propagation distance of 20 mm from the pinhole to the sample. The detector was placed at a distance of 196 mm behind the sample. Figure 3(c) presents the calculated diffraction pattern *I*(**u**, *λ*) for the wavelength of 700 nm. The illumination wavelength scans from 700 to 790 nm in 10-nm steps. The intensity transmission efficiency is assumed to be identical for all wavelengths, while the phase varies with wavelength according to Equation (5). To simulate the disturbances of an optical bench experiment, the dispersion of the sample is assumed to be Δ*n* = 1.0 × 10^{−4}/nm with an RI of 1.411 at 700 nm; a small random phase ramp was also added to the wavefront of each illumination to simulate the phase change between wavelengths. Figures 3(d) and 3(e) present the numerically reconstructed images. According to Equation (2), a Wiener-type weighting function is used in the update of the object function, meaning that the update speed of the retrieved object function correlates with the intensity of the illumination. Thus, the part of the object illuminated more strongly converges faster and has a higher signal-to-noise ratio, while the remaining part is more easily blurred by noise. Consequently, the noise in the boundary region, where the illumination probe is weak, decreases the contrast of the reconstructed images, while the strongly illuminated central area is much clearer and agrees with the actual distributions shown in Figures 3(a) and 3(b). Figure 3(f) shows the reconstructed illumination at 750 nm on the sample plane. The object images and illuminating beams of the other wavelengths were also reconstructed but are not shown here for clarity.
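A probe of this kind can be generated with the standard angular-spectrum propagator, which is valid in the Fresnel regime used here. The sketch below uses illustrative grid parameters of our own choosing (the paper does not specify its sampling):

```python
import numpy as np

def angular_spectrum(field, wavelength, z, dx):
    """Propagate a complex field over distance z with the angular-spectrum method."""
    N = field.shape[0]
    fx = np.fft.fftfreq(N, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * kz * z), 0)   # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# pinhole of 1 mm diameter propagated 20 mm to the sample, as in the simulation;
# grid size and pixel pitch are illustrative assumptions
N, dx = 256, 20e-6
x = (np.arange(N) - N / 2) * dx
X, Y = np.meshgrid(x, x)
pinhole = (X ** 2 + Y ** 2 <= (0.5e-3) ** 2).astype(complex)
probe = angular_spectrum(pinhole, 700e-9, 20e-3, dx)
```

Since the propagation kernel is a pure phase over the propagating band, the total power of the field is conserved, which is a convenient correctness check for the propagator.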

To quantitatively investigate the performance of the proposed ws-PIE method, we calculate the reconstruction error as a function of the iteration number. The results are shown in Figure 4, where Figure 4(a) shows the convergence for different wavelength scanning ranges and Figure 4(b) shows the convergence for different numbers of wavelengths used. We can observe that a wavelength scanning range of 100 nm generates the best reconstruction. When the wavelength range is too large, the dispersion of the sample becomes non-negligible and Equation (5) no longer holds. Conversely, when the wavelength range is too narrow, the information embedded in the diversity of the recorded diffraction patterns is too limited to retrieve accurate complex fields. In Figure 4(b), where the wavelength scanning range is fixed between 700 and 800 nm, the number of wavelengths used for the imaging takes the values of 5, 8, 10, 15, and 18. We can deduce that when only five wavelengths are used, the recorded diffraction patterns cannot provide enough information for an accurate reconstruction. On the other hand, when the number of wavelengths is larger than eight in this simulated case, there is no remarkable further improvement in reconstruction quality with increasing wavelength number. This is because, as the number of wavelengths increases, the difference between the diffraction patterns of two neighboring wavelengths becomes tiny; thus, little additional information is gained. According to the results of Figure 4, an ideal reconstruction can be achieved by using 10 wavelengths within the range of 700–790 nm.

## IV. PROOF OF PRINCIPLE EXPERIMENT

The feasibility of the ws-PIE was demonstrated with a proof-of-principle experiment on the optical bench using the optimized parameters obtained in the above simulations. The shape of the pinhole *H*(**r**) shown in Figure 5(a) was measured accurately with the ePIE method at a wavelength of 700 nm. These accurate aperture data act as a spatial constraint in the iterative computation to reconstruct the illuminating beams. The radiation source was an OPO (EKSPLA PT257, tunable from 690 to 990 nm), and each wavelength was monitored in line with a spectrometer to avoid wavelength drift.^{44} High coherence of the illumination source is crucial for any coherent diffractive imaging system, as partial coherence leads to blurred diffraction patterns and reduced reconstruction contrast. The typical pulse duration of the OPO source used in our experiment is 3–4 ps and the typical time-bandwidth product is less than 0.8 according to the manufacturer's data sheet; hence, the maximum bandwidth is less than 0.54 nm. Taking λ = 700 nm as an example, the real wavelength ranges from 0.9996λ to 1.0004λ, which can be ignored since no apparent artifacts appear in the diffraction patterns. The diffraction pattern was recorded by a CCD camera (AVT GC780; 582 × 782 array; pixel size 8.3 *μ*m). Ten laser beams with wavelengths ranging from 700 to 790 nm at a step size of 10 nm were individually incident on a fixed biological sample of a dragonfly wing to form the diffraction patterns. Figures 5(b) and 5(c) show the reconstructed modulus and phase images of the sample at an illumination wavelength of 750 nm, where the network of fine veins forming a group of hexagonal cells is revealed clearly.
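The sub-nanometre bandwidth figure can be reproduced from the time-bandwidth product. This is a rough check assuming a transform-limited pulse (our assumption; the exact figure depends on the pulse shape and duration):

```python
# Spectral width from the time-bandwidth product: dnu = TBP / dt for a
# transform-limited pulse, converted to wavelength via dlam = lam^2 * dnu / c.
c = 2.998e8             # speed of light, m/s
tbp, dt = 0.8, 3e-12    # time-bandwidth product and shortest quoted pulse duration
dnu = tbp / dt          # spectral width, Hz

dlam_700 = (700e-9) ** 2 * dnu / c   # spectral width at 700 nm, m
dlam_790 = (790e-9) ** 2 * dnu / c   # spectral width at 790 nm, m
```

Both values come out below 0.6 nm, i.e., a relative bandwidth under 10^{−3}, consistent with the 0.9996λ–1.0004λ range stated above.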
Figures 5(d)–5(f) show three recorded diffraction patterns at 700, 750, and 790 nm, respectively, and the corresponding retrieved modulus of the illumination wave field *P*(**r**, *λ*_{m}) incident on the specimen is shown in Figures 5(g)–5(i), where we can observe the remarkable difference between them. This difference is the reason why an ideal reconstruction cannot be achieved using traditional CDI with the wavelength scanning scheme.

To check the resolving capability of the proposed ws-PIE, we repeated the above experiment using a USAF 1951 resolution target, and the reconstructed image is shown in Figure 6(a). For comparison, the resolution target was also imaged with the common transverse scanning ePIE method at a wavelength of 700 nm, and the reconstructed image is shown in Figure 6(b). We find that there is almost no remarkable difference between the two images. The finest resolved features in the ws-PIE image are in element 6 of group 5, corresponding to a resolution of 8.77 *μ*m, and the resolvability of the common PIE is slightly lower than that of the ws-PIE due to the positioning error of the translation stage. However, the data acquisition time for the ws-PIE is less than 10 s, while the common PIE requires about 10 min. Furthermore, the resolution of both the ws-PIE and the common PIE is much lower than the expected diffraction limit of 350 nm (λ/2) because of the limited aperture size of the CCD and the unavoidable noise in the diffraction patterns, which together remove the high-frequency components of the diffracted wave and hence reduce the image resolution. As no optical magnification was adopted in the above experiments, the resolution achieved is not very high compared to common microscopic images. If an objective is used to pre-magnify the sample, the resolution will be distinctly improved.
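The 8.77 μm figure follows from the standard USAF 1951 relation, in which element *e* of group *g* has a spatial frequency of 2^{g+(e−1)/6} line pairs per millimetre; a quick check (the helper name is ours):

```python
def usaf_linewidth_um(group, element):
    """Line width in micrometers of a USAF 1951 target element.

    The element's spatial frequency is 2**(group + (element - 1) / 6) lp/mm,
    and one resolved line is half of a line pair.
    """
    lp_per_mm = 2 ** (group + (element - 1) / 6)
    return 1000.0 / (2 * lp_per_mm)
```

Group 5, element 6 gives about 57 lp/mm, i.e., a line width of roughly 8.77 μm, matching the resolution quoted above.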

To illustrate the validity of this method for imaging optically thicker specimens, we also repeated the above experiment using a bee's leg as the specimen. The reconstructed modulus and phase images are shown in Figures 7(a) and 7(b), respectively, where the pollen combs are clearly revealed and the reconstruction quality is comparable to that reported in the literature.^{12} This experiment clearly demonstrates the wide applicability of the proposed ws-PIE, especially in the visible light regime.

## V. CONCLUSION

A ws-PIE technique is proposed to overcome the shortcomings of the common mechanical scanning-based PIE, namely its time-consuming data acquisition and positioning errors. By replacing the mechanical scanning with wavelength scanning, the data acquisition speed is improved by about sixty times. At the same time, as there is no mechanical movement during the whole data acquisition process, the resolution degradation induced by positioning error is totally avoided. The feasibility of the proposed method is verified with both numerical simulation and experiment on the optical bench. As both the data acquisition speed and the spatial resolution are improved, the proposed technique is suitable for many applications in the fields of wavefront detection, material science, and biomedical imaging. For future work, we will focus on combining the ws-PIE with existing commercial systems such as lens-based microscopes to enhance the resolution and enable further applications. Quantitative analysis of the method's limits and errors, including the spatial resolution, the phase accuracy, and the influence of the laser source bandwidth, will also be crucial subsequent work.

## ACKNOWLEDGMENTS

This work is supported by the National Natural Science Foundation of China (Grant No. 61675215) and by CAS under Grant No. 1Z1629051A0001.

## REFERENCES
