Holographic imaging techniques, which exploit the coherence properties of light, enable the reconstruction of the 3D scenery being viewed. While the standard approaches for the recording of holographic images require the superposition of scattered light with a reference field, heterodyne detection techniques enable direct measurement of the amplitude and relative phase of the electric light field. Here, we explore heterodyne Fourier imaging and its capabilities using active illumination with continuous-wave radiation at 300 GHz and a raster-scanned antenna-coupled field-effect transistor (TeraFET) for phase-sensitive detection. We demonstrate that the numerical reconstruction of the scenery provides access to depth resolution together with the capability to numerically refocus the image and the capability to detect an object obscured by another object in the beam path. In addition, the digital refocusing capability allows us to employ Fourier imaging also in the case of small lens-object distances (virtual imaging regime), thus allowing high spatial frequencies to pass through the lens, which results in enhanced lateral resolution.

## I. INTRODUCTION

Imaging at terahertz (THz) frequencies offers interesting features for technical applications. Among these are the transparency of many materials, which provides the ability to look into and through objects such as packages and technical materials; the existence of spectral fingerprints for the identification of substances; and the weak scattering by dust, smoke, and mist, which makes THz radiation interesting for imaging under challenging ambient conditions in a radarlike fashion^{1} but with better spatial resolution. The latter could be promising, e.g., for robotics and autonomous vehicles. Many of these envisaged applications would operate in the deep subterahertz frequency regime (approximately 0.1–0.6 THz), where the large wavelength of the radiation makes it difficult and expensive to build detector arrays that exhibit a sufficient number of detectors needed for high frame rates together with a good image quality, as defined by a sufficiently large field-of-view with acceptable spatial resolution. For this reason, the integrated detector arrays currently available,^{2–7} with pixel numbers of less than 10 000, are only of limited usefulness for standard two-dimensional (2D) imaging as well as for three-dimensional (3D) computed tomography^{8,9} because the detectors cover only a small field-of-view with good resolution.

An alternative is Fourier imaging^{10–12} with the detector array being positioned in the focal plane (coordinates: *x*, *y*) of the optical system, where the beam cross section is smallest along the entire beam path. As with other coherent microwave and THz imaging techniques such as phase-shift interference imaging,^{13,14} holography,^{15} in-line holography^{16,17} as well as off-axis holography,^{18–21} near-field holography,^{22} and time-reversal holographic imaging of hidden objects,^{23} the image is then calculated numerically from the recorded data. The field-of-view is determined by the pixel pitch with which the field has been recorded and not by the size of the detector array as in conventional imaging (with *conventional imaging*, we mean here and in the following the recording of the spatial distribution of the radiation intensity in the image plane of the lens for a given object distance).

Fourier imaging offers the possibility of 3D imaging. The distance information is encoded in the spatial phase distribution in the focal plane of the imaging system (“lens”). It is contained mathematically in the second factor of the following equation, which describes the focal-plane field distribution in a thin-lens approximation:

$$E(x,y) = \frac{1}{i\lambda f}\,\exp\!\left[\frac{ik}{2f}\left(1-\frac{d}{f}\right)\left(x^{2}+y^{2}\right)\right]\iint T(X,Y)\,\exp\!\left[-\frac{ik}{f}\left(xX+yY\right)\right]dX\,dY. \tag{1}$$

*k* = 2*π*/*λ* is the wave number. *T*(*X*, *Y*) denotes the transmitted field from an object plane (coordinates: *X*, *Y*) located at a distance *d* in front of the lens with focal length *f*. The integral represents the (aperture-truncated) Fourier transform of the complex-valued field at the object plane.

The distance-encoding in the phase information is illustrated with the simulations shown in Fig. 1. Figure 1(a) displays the phase profile of a circular aperture centered on the optical axis. Figure 1(b) plots the phase term $\frac{k}{2f}\left(1-d/f\right)\left(x^{2}+y^{2}\right)$ for different object locations. For an object placed at the front focal plane (*d* = *f*), one obtains a flat phase contribution. Away from the focal plane, the phase has a paraboloidal shape, with the sign and magnitude of the curvature unambiguously dependent on *d*. This phase contribution is to be added to that determined by the object itself, which in our case is the circular aperture [with the phase profile of Fig. 1(a)]. Figure 1(c) shows the total phase in the Fourier plane for the five distances of the aperture of Fig. 1(b). Comparing the resultant phase contours, one can clearly see the influence of the distance. For more complex objects, it is usually not straightforward to visually identify the distance information in a phase image without the help of a suitable computer-based analysis.
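The behavior of this quadratic phase term is easy to verify numerically. The following NumPy sketch (grid extent and object distances are illustrative choices, not the exact simulation parameters of Fig. 1) evaluates the term for several values of *d*:

```python
import numpy as np

# Illustrative parameters (wavelength and focal length as in the experiment)
lam = 1e-3            # wavelength: 1 mm (300 GHz)
k = 2 * np.pi / lam   # wave number
f = 0.15              # focal length of the lens: 15 cm

# Focal-plane coordinate grid, 80 x 80 mm with 1-mm pitch as in the scans
y, x = (np.mgrid[0:81, 0:81] - 40) * 1e-3

def quadratic_phase(d):
    """Distance-encoding phase term k/(2f) * (1 - d/f) * (x^2 + y^2)."""
    return k / (2 * f) * (1 - d / f) * (x**2 + y**2)
```

At *d* = *f* the term vanishes identically (flat phase); for *d* < *f* the paraboloid curves upward, and for *d* > *f* downward. This sign signature is what makes the distance encoding unambiguous.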

In this paper, the heterodyne detection technique is adopted to directly measure both the amplitude and phase of the Fourier spectrum rather than an interference pattern generated by superposition of object and reference fields as in standard holography. We exploit the phase information in order to numerically focus onto different objects in the beam path. We show that the proposed method is capable of imaging the contours of several opaque targets in the beam path if enough scattered light from the targets arrives at the detector.

## II. EXPERIMENTAL SETUP

We investigate terahertz Fourier imaging experimentally at 300 GHz. A single-pixel TeraFET detector operated in the heterodyne mode^{24–28} is raster-scanned across the focal plane of an imaging lens and records the amplitude and phase of the radiation arriving from the scene under investigation. The complex-valued focal-plane field map then allows reconstruction of the object with an inverse Fourier transform method (see Appendix A). The method is different from single-pixel imaging where a power detector is located at a fixed position and time-varied intensity or phase patterns are projected onto it.^{29–31}

The measurement setup is displayed in Fig. 2. For object illumination, we use an electrical multiplier-chain source (S1, vendor: Virginia Diodes, Inc.; base frequency of 16.66 GHz generated by a HP synthesizer; frequency multiplication factor: 18×; output power: 1 mW) whose continuous-wave (CW) radiation leaves the horn antenna with a 20° divergence angle. The radiation is collimated with a 3-in.-diameter aspherical Teflon lens (L1, from Thorlabs, Inc., *f* = 15 cm). The resultant beam diameter is about 5.5 cm. Another 3-in. Teflon lens (L2, same parameters as L1, numerical aperture N.A. of 0.254) focuses the transmitted radiation onto a TeraFET detector^{32–34} consisting of a Si CMOS field-effect transistor with a monolithically integrated dipole antenna. The radiation is coupled into the TeraFET through its weakly doped substrate to which a silicon substrate lens^{35,36} (diameter: 4 mm) is attached. A second multiplier-chain source (S2, vendor: RPG-Radiometer Physics GmbH, base frequency of 16.66 GHz + 1 kHz generated by a second HP synthesizer, frequency multiplication factor: 18×, output power: 432 *μ*W) serves as a local oscillator, and its radiation is focused with a third Teflon lens (L3, *f* = 10 cm) from the front side onto the active region of the TeraFET. The difference-frequency signal from the TeraFET is then integrated in a lock-in amplifier (PerkinElmer, integration time: 50 ms, providing a dynamic range of 60 dB) whose reference signal at 1 kHz is obtained by mixing the drive signals from the two synthesizers in a MARKI electrical mixer. The lock-in amplifier works at the 18th harmonic of the reference signal.
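The frequency plan of this heterodyne scheme can be summarized in a few lines (a back-of-the-envelope sketch using the synthesizer settings quoted above):

```python
f_base = 16.66e9       # base frequency of synthesizer 1 (Hz)
f_offset = 1e3         # 1-kHz offset of synthesizer 2 (Hz)
mult = 18              # multiplication factor of both chains

f_signal = mult * f_base               # illumination, ~300 GHz
f_lo = mult * (f_base + f_offset)      # local oscillator
f_if = f_lo - f_signal                 # difference frequency at the TeraFET: 18 kHz

# The lock-in reference is the 1-kHz mixing product of the two synthesizer
# drive signals; the 18-kHz difference frequency is its 18th harmonic,
# which is why the lock-in amplifier is operated at the 18th harmonic.
```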

The detector together with the lens L3 and the radiation source S2 is mounted on a translation stage that allows us to perform raster-scan imaging of the amplitude and phase of the radiation field across the focal plane (*xy*-plane) of the lens L2. The step width is 1 mm in both the *x*- and *y*-directions. The scans cover an area of 80 × 80 mm^{2}, and the total data recording time is 30 min (single scan).

A major challenge of these measurements has been the long coherence length of the radiation resulting from its narrow bandwidth of 1 MHz, which leads to highly problematic interference effects in the form of standing waves. These corrupt the phase distribution in the Fourier spectrum and lead to inaccurate reconstructions, since the phase spectrum plays the crucial role in the Fourier back-transformation. We have observed standing-wave-based signal variations upon lateral translation of the detector unit even without any object-under-test in the illuminating beam path. Assuming that these standing waves mainly arise from multiple reflections of radiation between the emitter unit (S1 and L1) and the receiver unit (L2 and detector), we have placed a homemade metal-film attenuator (14 nm of chromium on a 4-*μ*m-thick polypropylene membrane) into the path of the collimated illuminating beam. We have found that the standing-wave effects are thus reduced to tolerable levels. This remains the case after the objects-under-test have been positioned in the beam path. Therefore, we have used this attenuator in all experiments, mounting it in front of the objects to be imaged at an angle of about 25° relative to the optical axis (with the tilt, the reflected radiation is steered out of the beam path). For each pass, the intensity of the radiation propagating through the attenuator is reduced to 40% of the incoming intensity. Back-reflected radiation passes the attenuator twice more per round trip and is thereby reduced in intensity by an additional factor of (1/0.4)^{2} = 6.25, which significantly weakens standing-wave modulations. Also, for the sake of minimizing standing waves, all planar imaging targets have been positioned with a slight tilt in the beam (at an angle of 5°–10° relative to the optical axis). This tilting of the objects has been taken into account in the Fourier reconstruction algorithm.
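The suppression arithmetic is compact enough to state explicitly (a minimal sketch of the numbers quoted above):

```python
# Single-pass intensity transmission of the chromium-film attenuator
T_pass = 0.40

# The illumination passes the attenuator once.  Radiation back-reflected
# from the receiver side must traverse it twice more before it can return
# toward the receiver, so each standing-wave round trip is attenuated by
# an additional intensity factor of
round_trip_suppression = 1 / T_pass**2   # = (1/0.4)^2 = 6.25
```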

## III. RESULTS

### A. Lateral and depth resolution

In a first experiment, we demonstrate that a planar object hidden between sheets of plastic foam, which is opaque in the visible but transparent at terahertz frequencies, can be detected with good spatial resolution using Fourier imaging. For this measurement, a USAF 1951 resolution test chart, imprinted in the metal of a printed circuit board, is embedded between two 2-cm-thick layers of polyurethane foam and placed into the collimated illumination beam 10 cm in front of the lens L2. Figure 3 shows the three segments of the chart that were imaged and the three reconstructed intensity images. They are corrected for the previously measured Gaussian intensity profile of the beam. Given the width of the metal structures (the minus sign of number 3 as well as the stripes in front of number 1 have a width of 2 mm, while that of numbers 1, 4, 5, and 6 is less than 1 mm), one determines the resolution to be on the order of 2 mm.

In Appendix B, we give a derivation of the lateral resolution, i.e., the smallest measurable distance between neighboring points in the object plane, expected for our experimental parameters. We obtain a theoretical value of 1.875 mm, which is indeed close to the 2 mm estimated above from the experimental data.

In a second experiment, we investigate numerical refocusing during image reconstruction and the effect of the variation of the assumed object distance on the lateral spatial resolution. The object-under-test is the Siemens star shown in Fig. 4(a). The area within the dashed circle is the imaged region (whose size is determined by the optical system). The center of the Siemens star has been shifted slightly off the optical axis in order to cover a region of the star with a strong variation of the pattern size. Figure 4(b) displays an intensity image taken in the conventional way with a 4*f*-setup (object and image distance both equal to 2*f*), based on the same collimated illumination beam and using the same focusing lens L2 as the Fourier imaging system. The image has been corrected for the Gaussian intensity profile of the collimated illumination (measured without the object in the beam path). The circular structures (blotchy white rings) in the figure come from the diffraction at the edge of the collimation lens.

Figure 5 shows a series of reconstructed images obtained with the Fourier method. With Eq. (D1) given in Appendix D, one calculates that the expected depth resolution of our experiment is 0.7 cm (assuming the values 2|*y*_{max}| = 80 mm, *f* = 15 cm, and *λ* = 1 mm, see Appendix D for details). Based on this expected resolution, the Siemens star has been placed at different distances *d* from the lens L2 ranging from 9 cm to 12 cm in 1-cm steps. For each of the recorded Fourier maps, we then perform the reconstruction with each of these four values of *d*. This yields the matrix plot of Fig. 5, where the object position varies along the columns and the recovery distance along the rows. Object and recovery distances coincide along the diagonal of the matrix plot. As with physical defocusing, an unblurred image is obtained only in this case; the image quality degrades if the recovery distance deviates from the object distance.

Figure 6 compares the lateral spatial resolution achieved by conventional 4*f*-imaging with that of the object reconstruction by Fourier imaging. The comparison is performed with the help of circular line scans in the respective intensity images of Fig. 4(b) and the bottom right of Fig. 5. The circles have radii of 6.4 mm, 12.7 mm, 25.5 mm, and 31.8 mm, corresponding to a local width of the triangular metal patches of the Siemens star of 1 mm, 2 mm, 4 mm, and 5 mm (each patch covers a central angle of 9°). The position of each arc is marked by a dashed or solid red, green, blue, or black line in Fig. 4(b) and the bottom right of Fig. 5, dashed lines for the conventional image and solid lines for the Fourier reconstruction. Considering first the conventional image, one finds that the line scans along the two arcs with the smallest radii show hardly any modulation, while a modulation is visible for the other two line scans. The bright-dark pattern of the metal-dielectric sequence is not fully resolved in any line scan. From these findings, one estimates that the spatial resolution is about 4 mm. Using the knife-edge method, one also finds a resolution of about 4 mm, which is in agreement with the 3.95-mm resolution calculated for the 4*f*-geometry (see Appendix B) with a 3-in. focusing lens aperture and a 15-cm focal length at 300 GHz. In contrast, the line scans of the reconstructed Fourier image exhibit a more pronounced modulation, even revealing sharp dark-bright modulation for the two arcs with the smallest radii. These observations suggest a resolution on the order of 2 mm, which is in good agreement with the calculated resolution of 1.875 mm (see Appendix B).

For the parameters chosen, the Fourier imaging system achieves a resolution more than twice as good as that of the 4*f*-imaging system built with the same illumination source and imaging lens and using the same data recording area (in one case in the image plane and in the other in the focal plane). The better resolution of the Fourier image is entirely a consequence of the fact that the object has been placed closer to the lens. In fact, we have chosen *d* < *f*, which for conventional imaging implies the generation of a virtual image, so that direct image recording with a sensor matrix is not possible anymore. Conventional imaging reaches maximal resolution for *d* = *f*. With Fourier imaging, using the same lens, an improved resolution beyond this limit is possible.

### B. Detection of obscured objects via computational refocusing

In a last experiment, we demonstrate the capability of Fourier imaging to detect objects located in the line-of-sight behind other ones by numerical focusing, provided that enough scattered radiation from the objects reaches the detector. A steel screw is placed at *d* = 18 cm in front of the lens L2 and a steel washer at *d* = 16 cm. The objects are held in place in the beam path by polyurethane foam, which they have been pressed into. The washer blocks the line-of-sight of the screw toward the lens in the direction parallel to the optical axis of the imaging system. This arrangement is indicated in Fig. 7(a). The intensity and phase images of the washer and the screw recovered from a single frame recorded in the focal plane of lens L2 are shown in Figs. 7(b)–7(e). Since the washer and the screw are opaque to terahertz waves, traditional imaging would hardly allow us to identify the screw behind the washer. However, the complex-valued field map of the focal plane contains enough scattered radiation, together with the phase-encoded distance information, that it is possible to extract the screw’s contour by numerically focusing into the plane where it is located. The ability to distinguish the two objects vanishes as they are moved closer together and less scattered light passing around the washer or through its hole is collected by the lens L2 and reaches the detector.

## IV. DISCUSSION AND CONCLUSIONS

Here, we further discuss the advantages and challenges brought about by the coherent field detection in the focal plane of a lens. We stress again the capability of Fourier imaging to enable a better spatial resolution than conventional imaging with a given lens because it is possible to place the object closer to the lens, at a position between the lens and focal point where conventional imaging generates virtual images. This does not affect the image reconstruction capability of Fourier imaging, which then benefits—at closer distance to the lens—from the reduced lens-aperture-induced obstruction of the *k*-space components of the radiation coming from the object.

We also emphasize the strength of Fourier imaging to recover 3D space information by sequential object-plane reconstruction via the variation of the distance *d* in the back-transformation process. While this capability is intrinsic to all coherent imaging techniques, Fourier imaging—owing to the focusing effect of the lens—has the advantage that the beam area to be covered is smallest. Thus, it lends itself to the use of heterodyne focal plane arrays. This can be especially beneficial for the terahertz frequency regime given the practical restrictions for detector arrays arising from the large wavelength of the radiation. The heterodyne operation can be achieved in a subharmonic manner,^{37} thus alleviating the costs for a second high-frequency radiation source. A specific benefit arising from the capability of numerical focusing is that two opaque targets, although they are in each other’s beam path, can be distinguished from each other using the field map of a single focal-plane frame. The requirement is to have enough light intensity available at the detector from each object. This object distinction as well as most other aspects of Fourier imaging depends critically on the avoidance of detrimental standing waves, which arise if the coherence length of the radiation is too large. These standing waves usually cannot be taken into account properly in the back-transformation process, even after careful calibration of the illumination beam prior to the imaging process, because the objects to be studied are not known well enough to account for the change in the field distribution caused by them.

## ACKNOWLEDGMENTS

Technical help by Dovile Čibiraitė and helpful discussions with Mark Thomson, Zhumei Sun, and Peter Haring Bolívar are acknowledged. H.Y. is supported by a scholarship from the China Scholarship Council (CSC Grant No. 201606030113). A.L. is thankful for the support from the Foundation for Polish Science grant IRA CENTERA. Other funding has been obtained from EU FP7-MC-IAPP HYPERIAS (Grant No. 324445).

### APPENDIX A: IMAGE RECONSTRUCTION

The distribution of the electric field coming from the scene and arriving in the focal plane of the lens L2 is described by Eq. (1) in the main part of the document. The integral contained therein can be interpreted as the Fourier transform of the complex transmission function *T*(*X*, *Y*) in an object plane at a distance *d* in front of L2.

In order to reconstruct the transmission function from the complex-valued focal-plane field map, we have implemented a MATLAB routine that utilizes the inverse 2D FFT to calculate the light field at any given distance *d* in front of the focusing lens (taking a deliberate tilt of the object into account, see above). The routine returns the relative amplitude and the phase of the scattered target wavefront. In order to smooth the reconstructed image, the field-map area is expanded by a factor of two by padding zeros^{38} around the raw data. An example of a reconstruction is given in Fig. 8. It shows a metal grid consisting of 2.5-mm-wide aluminum stripes with a spatial period of 7.5 mm, the recorded amplitude and phase maps, and the reconstructed amplitude and phase at the object plane. The grid is well resolved although the stripe width is close to the resolution limit of 1.875 mm calculated in Appendix B.
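The core of such a routine is only a few lines. The paper's implementation is in MATLAB; the NumPy function below is a simplified stand-in (it ignores the object tilt and amplitude normalization and assumes a square, centered field map) that removes the distance-dependent quadratic phase of Eq. (1), zero-pads by a factor of two, and applies an inverse 2D FFT:

```python
import numpy as np

def reconstruct(E_focal, pitch, d, f, lam, pad=2):
    """Numerically refocus a complex focal-plane field map to an object
    plane at distance d in front of the lens (thin-lens model, no tilt).

    E_focal : square 2D complex array sampled with 'pitch' in the focal plane
    """
    k = 2 * np.pi / lam
    n = E_focal.shape[0]
    y, x = (np.mgrid[0:n, 0:n] - n // 2) * pitch

    # Remove the distance-encoding quadratic phase factor of Eq. (1)
    field = E_focal * np.exp(-1j * k / (2 * f) * (1 - d / f) * (x**2 + y**2))

    # Zero-pad by 'pad' to smooth the reconstructed image (as in Appendix A)
    m = pad * n
    padded = np.zeros((m, m), dtype=complex)
    lo = (m - n) // 2
    padded[lo:lo + n, lo:lo + n] = field

    # The remaining integral is a plain Fourier transform with conjugate
    # variables (k x / f, k y / f); invert it with a centered inverse FFT.
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(padded)))
```

The argument `d` is exactly the numerical-refocusing knob discussed in Sec. III A: reconstructing one recorded field map with different values of `d` focuses onto different object planes.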

### APPENDIX B: LATERAL RESOLUTION

Owing to the properties of the spatial Fourier transformation, a higher scan resolution in the focal plane results in a larger field-of-view in the object plane. A wider scanning area in the focal plane of the lens, on the other hand, translates into a higher lateral resolution in the object plane. The increase of the lateral resolution finds its limit, however, when all wave-vectors permitted by the diffraction limit of the lens (or, more generally, by the diffraction limit of the entrance pupil of the imaging optics) are recorded by the detector. The lens induces a soft cutoff for wave-vectors with an absolute value of the lateral component $k_{\parallel} \gtrsim k_{\mathrm{cut}} = kD/\sqrt{D^{2}+4d^{2}}$, where *k* is the absolute value of the wave-vector, *d* is the object distance from the lens, and *D* is the diameter of the lens. If *d* ≫ *D*/2, one has in the paraxial approximation $k_{\mathrm{cut}} = kD/2d$.

In order to find the spatial location of the *k*_{cut} Fourier components in the focal plane of the lens, we can use geometrical-optics arguments. For a point object at *d* > *f* on the optical axis, the real image is located on the optical axis at the image distance *b* = *fd*/(*d* − *f*). The *k*_{cut} Fourier component is then found in the focal plane at a distance |*y*_{cut}| from the optical axis given by

$$|y_{\mathrm{cut}}| = \frac{D}{2}\,\frac{b-f}{b} = \frac{Df}{2d}. \tag{B1}$$

If the field map in the Fourier plane extends over a width of more than 2|*y*_{cut}|, then the resolution, i.e., the smallest resolved distance *R*_{>} in the object plane, is given approximately by

$$R_{>} \approx \frac{\lambda d}{D}, \tag{B2}$$

while for a scan area with a diameter of 2|*y*_{max}| less than 2|*y*_{cut}|, *D* must be replaced by an effective lens aperture *D*_{eff} = 2*b*|*y*_{max}|/(*b* − *f*) = 2|*y*_{max}| *d*/*f*, which yields the resolution

$$R_{<} \approx \frac{\lambda d}{D_{\mathrm{eff}}} = \frac{\lambda f}{2|y_{\mathrm{max}}|}. \tag{B3}$$

These equations are not only applicable for *d* > *f* but for all values *d* > 0, i.e., also in the regime *f* ≥ *d* > 0, where conventionally taken images are virtual and cannot be recorded with detectors. The numerical refocusing capability of Fourier imaging, in contrast, allows for image reconstruction also in this case.

If the object is closer to the lens, the cut-off value *k*_{cut} for the wave-vectors increases, and one can expect an improved spatial resolution. This is quantified in the following for the measurements of Fig. 5, which have been taken in the regime *f* ≥ *d* > 0. With Eq. (B1), one finds for our experiments (with *D* = 7.6 cm, *f* = 15 cm, and *λ* = 1 mm) for a typical object distance *d* = 10 cm, a value of 2|*y*_{cut}| of 11.4 cm. The lens-passed Fourier components hence span a rather large area in the focal plane. The quadratic scan area in the focal plane is 8.0 × 8.0 cm^{2}. Approximating it by a circular area with a diameter of 2|*y*_{max}| = 8.0 cm, one finds |*y*_{max}| < |*y*_{cut}|. The resolution is hence to be estimated with Eq. (B3), which gives a value of *R*_{<} of 1.875 mm.
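The numbers quoted in this paragraph follow directly from these expressions; a quick numerical check with the stated experimental parameters:

```python
lam = 1e-3     # wavelength: 1 mm (300 GHz)
f = 0.15       # focal length of lens L2: 15 cm
D = 0.076      # lens diameter: 7.6 cm
d = 0.10       # typical object distance of Fig. 5: 10 cm
y_max = 0.04   # half-width of the 8.0 x 8.0 cm^2 focal-plane scan

# Extent of the lens-passed Fourier components in the focal plane, Eq. (B1):
two_y_cut = D * f / d                 # -> 0.114 m = 11.4 cm

# Scan area smaller than 2|y_cut|, so the scan aperture limits the
# resolution, Eq. (B3):
R_small = lam * f / (2 * y_max)       # -> 1.875e-3 m = 1.875 mm

# For comparison, the lens-limited resolution in 4f geometry (d = 2f), Eq. (B2):
R_4f = lam * 2 * f / D                # -> ~3.95e-3 m
```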

Comparing the resolution of the Fourier images with that achieved with conventional imaging in the 4*f*-geometry (where the real image has the same size as the reconstructed Fourier image), we can employ the well-known Rayleigh criterion for the minimal resolvable distance between points in the object plane. This distance is given by Eq. (B2) with an additional factor of 1.22 for a circular aperture. Omitting this factor here, because we have not taken corresponding prefactors into account for Fourier imaging, we obtain with Eq. (B2) for the 4*f*-geometry (where *d* = 2*f*) a resolution *R*_{4f} = 2*λf*/*D* = 3.95 mm. *R*_{4f} is larger than *R*_{<} by more than a factor of two, which is explained by the smaller object distance used here for Fourier imaging.

### APPENDIX C: FIELD-OF-VIEW

Considering now the field-of-view covered by the object reconstruction, one finds that the extension (diameter) of the field-of-view is given approximately by

$$\mathrm{FOV} \approx \frac{\lambda f}{\Delta y}, \tag{C1}$$

where Δ*y* is the step width of the field scanning in the focal plane. For the 1-mm step width (1-mm^{2} pixel size) used in our focal-plane scans, the inverse transformation to the object plane yields a field-of-view of 15 cm.
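With the experimental parameters, this evaluates as follows:

```python
lam = 1e-3        # wavelength: 1 mm
f = 0.15          # focal length: 15 cm
delta_y = 1e-3    # scan step width in the focal plane: 1 mm

fov = lam * f / delta_y   # -> 0.15 m = 15 cm
```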

### APPENDIX D: DEPTH RESOLUTION

We finally address the depth resolution of Fourier imaging. According to Eq. (1) of the main document, the change of the phase at the edge of the scanned area upon a change Δ*d* of the object distance is Δ*ϕ* = *π*|*y*_{max}|^{2} Δ*d*/(*λf*^{2}). Adopting the criterion that a phase change larger than *π*/2 is recognizable, the minimum distinguishable change of the object distance *d*_{min}, i.e., the depth resolution of the system, is given by

$$d_{\mathrm{min}} = \frac{\lambda f^{2}}{2|y_{\mathrm{max}}|^{2}}. \tag{D1}$$

If the lens aperture rather than the scan area limits the resolution, i.e., if |*y*_{max}| > |*y*_{cut}|, then |*y*_{max}| should be substituted by |*y*_{cut}|. In the scan-limited case, the depth resolution is hence independent of the object distance *d*, a remarkable result.
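With the scan parameters of our experiment (|*y*_{max}| = 40 mm), Eq. (D1) reproduces the 0.7-cm depth resolution quoted in Sec. III A:

```python
lam = 1e-3     # wavelength: 1 mm
f = 0.15       # focal length: 15 cm
y_max = 0.04   # half-width of the 8 x 8 cm^2 focal-plane scan

d_min = lam * f**2 / (2 * y_max**2)   # -> ~7.0e-3 m, i.e., about 0.7 cm
```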