The image resolution produced by a lens/camera system is limited by the digital sampling frequency of the sensor and the diffraction limit as imposed by the front aperture diameter of the optics. A previous study using Image Phase Alignment Super-Sampling (ImPASS) demonstrated that Fourier phase information from a sequence of slightly displaced images can be used to achieve image resolution beyond the digital sampling frequency. In continuation of that work, this study applies ImPASS to sequences of slightly displaced empirical images for a range of aperture settings. The frames are up-sampled, aligned, and combined into a single frame. Application of Self-Deconvolving Data Restoration Algorithm (SeDDaRA) deconvolution reveals features with higher resolution. The slanted edge technique is applied to the processed images to establish the angular resolution of the system as a function of the effective f-number. When compared to Abbe’s resolution criteria, the measurements reveal that this super-sampling method produces image resolution that subceeds the diffraction limit of the lens/camera system.
Increasing the resolution of an imaging system through non-traditional techniques is often pursued as a means to replace larger optical imaging devices with smaller ones for cost savings or to extend the capability of an existing system. This research, however, is also motivated by a pursuit to push imaging technology beyond a fundamental limitation: the resolution of an imaging system being limited by diffraction as determined by the front aperture diameter.1 In this paper, we apply a method of super-sampling image processing to a sequence of slightly displaced images, producing a single larger image with improved image resolution. Application of a standard image resolution measurement reveals that the measured resolution subceeds the diffraction limit for a range of aperture diameters.
The method of super-sampling, sometimes referred to as super-resolution, combines a sequence of low resolution images by using various techniques to create an image with higher resolution. Super-sampling generally consists of three primary steps:2
1. While viewing a static scene, a sequence of images is captured with small differences in view for each frame.
2. The image values are placed onto an up-sampled grid to create a single larger image.
3. A deconvolution method is applied to reveal a high resolution image.
In my recent paper,3 Image Phase Alignment Super-Sampling (ImPASS) was applied to a simulated set of slightly displaced image sequences. Here, the term “phase” refers to the positional component derived from the application of a Fourier transform, having no direct relation to the phase of the electromagnetic wave. The results proved that with phase information, acquired through the combination of up-sampled images, and a model of the missing magnitude information, image resolution can be obtained well beyond the limits imposed by the spatial sampling frequency of the sensor. Super-sampling was accomplished even when random noise and simulated spherical aberrations were added to the images. While that paper also applied ImPASS to a sequence of laboratory images, showing improved image resolution, the measured resolution value was above the diffraction limit. That experiment, however, was limited to the largest aperture setting on the camera, which, as described below, keeps the image resolution above the diffraction limit.
For this study, sequences of slightly displaced images of a resolution chart were taken with a camera as the chart was moved in small increments on a translation stage. A different aperture size was used for each sequence. ImPASS was applied to the image sequences to produce super-sampled images. The resolution was measured for each aperture setting by using the slanted edge technique (International Standard ISO 12233:2017).
The definitions of the terms super-resolution and super-sampling are not applied consistently in the literature.4 For this investigation, definitions are provided to emphasize the two distinct resolution thresholds. Super-sampling is defined as a process that combines a set of low resolution digital images into a single image with higher sampling and image resolution than any raw image in the set. Super-sampling is mostly used to enhance the image resolution of an existing system, as was the case for Voyager Martian images.5 Super-sampling has a spatial frequency threshold of 1/2p, where p is the size of the sensor pixel.
Here, super-resolution is defined as a method that produces image resolution that falls below the diffraction limit of the camera system. This has been reported in the field of microscopy;6–8 however, these methods generally use special illumination or optical elements to achieve improved resolution. For example, structured illumination microscopy requires patterns such as sinusoidal variations to illuminate the target.9–11 Fourier ptychography (FP)12 varies the angle of lighting using an array of LEDs. The wavelength scanning method13 sweeps the illumination through a 10–30 nm range to create the image set. The synthetic aperture method requires imaging through three holographic elements.14 Other methods, such as super-resolution fluorescence microscopy,15 “super-resolution by common-path interferometry,”16 and “stimulated emission depletion microscopy,”6 require special optics and laser illumination. In contrast, ImPASS does not employ special optical elements or require special illumination of the scene. Although ImPASS may prove beneficial for microscopic imaging, this study considers experimentation and application for macroscopic imaging.
The image resolution of a system is typically defined by the minimum distance between two equal point sources at which the existence of two sources, as opposed to one, can be identified. For small angles, this is expressed in terms of the angular field-of-view as

θ = aλ/D, (1)

where D is the diameter of the front aperture, λ is the wavelength of light, and a is the constant representing which criterion is being used. For the Rayleigh resolution criterion, a = 1.22, defined as the separation at which the principal maximum of one Airy pattern overlaps the first minimum of the other.1 The Sparrow limit17 is reached when the combined light from the central Airy disks is constant along a line, producing a = 0.94. Physicist Ernst Abbe is credited with defining the wide-field numerical aperture, resulting in a = 1.18 This study uses the Abbe criterion as the diffraction limit.
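As a numerical check, Eq. (1) can be evaluated for the three criteria at the full-aperture values used in this study (λ = 630 nm, D = 17.85 mm); this is a minimal sketch, not part of the processing chain:

```python
# Two-point angular resolution theta = a * lambda / D, Eq. (1), evaluated
# for the three criteria at the full-aperture values used in this study.

def angular_resolution(a, wavelength_m, aperture_m):
    """Angular resolution in radians for criterion constant a."""
    return a * wavelength_m / aperture_m

wavelength = 630e-9  # peak illumination wavelength, m
D_full = 17.85e-3    # full aperture diameter at f/2.8, m

for name, a in [("Rayleigh", 1.22), ("Abbe", 1.00), ("Sparrow", 0.94)]:
    theta = angular_resolution(a, wavelength, D_full)
    print(f"{name:8s}: {theta * 1e6:.1f} microradians")
```

The Abbe value, 35.3 microradians, matches the optical cutoff quoted in Sec. IV.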
Every digital imaging system is limited in resolution by both the optical diffraction and digital sampling frequency. In spatial frequency (i.e., Fourier) space, these limits operate as low-pass filters and are multiplicative. The low-pass filter effect of digital sampling can be seen in the process of binning, which sums the blocks of pixels to reduce random noise but also lowers the image resolution.19 In most commercial systems, including this one, the sampling frequency dominates at full aperture and the image resolution is limited by pixel size. However, if the sampling frequency is increased, either by decreasing pixel size or through super-sampling, the optical diffraction limit dominates.
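The low-pass character of pixel summation can be seen in a short sketch using synthetic values (not data from this experiment): 2 × 2 binning halves the relative noise while coarsening the sampling grid.

```python
import numpy as np

# 2x2 binning: summing blocks of pixels reduces random noise relative to
# signal (here by a factor of 2) but also halves the sampling frequency,
# acting as a spatial low-pass filter.
rng = np.random.default_rng(0)
img = rng.normal(loc=100.0, scale=10.0, size=(480, 640))

binned = img.reshape(240, 2, 320, 2).sum(axis=(1, 3))

print(img.std() / img.mean())        # relative noise, ~0.10
print(binned.std() / binned.mean())  # relative noise, ~0.05
```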
A quantitative measurement is required to validate whether a method has achieved super-resolution as defined above. As discussed by Sheppard,20 measuring resolution from an image can produce misleading results. The chosen method should be independent of image contrast or sensitivity8 and thoroughly explained. As such, this paper describes the slant edge resolution measurement process in detail in Sec. VI.
III. EXPERIMENTAL OVERVIEW
The experimental setup is depicted in Fig. 1. A resolution chart, illuminated from behind, was mounted on an XY translation stage that provided translational steps. A machine vision camera with a fixed focus lens was mounted on a stationary tripod located about 1.5 m away. A sequence of “instrument” images was captured while the target position was varied laterally. The images were up-sampled to increase the sampling by a factor of 8, aligned, and combined into a single image.
Application of blind deconvolution to the up-sampled combined image then reveals the improved resolution. As discussed by Caron,3 the deconvolution process serves as a model of spatial frequencies, re-introducing the higher spatial frequency distribution that is truncated by the digital spatial frequency limit and potentially the diffraction limit. Phase information, according to recent reports,21–23 is not limited by the aperture diameter, as the phase is independent of the Fourier magnitude. Adjacent images in the sequence do not differ in Fourier magnitude; as a result of the displacement, however, the image data, treated as a collection of one-dimensional signals, differ in phase. With ImPASS, this phase information that lies beyond the diffraction limits is recovered by combining the aligned up-sampled images.3 The recovered phase information is multiplied by the magnitude model as introduced by the deconvolution in frequency space. Application of an inverse Fourier transform produces an image with resolution beyond the digital sampling limit and the diffraction limit of a single image capture.3
By using a single camera to capture sequential images with a translated field-of-view but otherwise static scene, this experiment simulates a camera array simultaneously imaging a single scene with slight changes in view. Thus, this research has important ramifications for imaging that typically requires large telescopes. Instead of a large monolithic mirror and single telescope, an array of smaller camera/telescope units may potentially be used, reducing cost and complexity. Additional research is required to derive the specifications of such a system.
IV. EXPERIMENTAL SETUP
As depicted in Fig. 1, a Thorlabs USAF resolution chart, sized 50.2 mm per side, is mounted onto two translation steppers to provide incremental motion in 5.1 μm steps in the horizontal and vertical directions. In the image plane, each step was about 1/13.3 the size of a pixel. The chart has a chrome coating in which the bars and numbers are transparent; it is illuminated from behind by an incandescent bulb. Opal glass, placed behind the chart, ensures that the illumination is diffuse. The camera, a Sony XCD-50 with a Tamron M118FM50 fixed-focal lens, was placed at DW = 1.45 m from the target. A representative full-frame image is shown in Fig. 2.
A sequence of images was collected by capturing images at 80 target positions. Between the target positions, the stage was moved by an Arricks Robotics MD2 stepper driver, both horizontally and vertically, 5.1 μm per step at the object plane. At each position, 16 images were taken and averaged together. This averaging is not required, but it diminishes the coherent noise (non-static pattern noise) specific to this camera in addition to random noise. Images were taken at 40 positions along two diagonals. Figure 3 shows the relative center positions of the image sequence at the image plane as measured from the images by using phase correlation registration.24,25 In simulations, the “X” pattern produced better results than an “L” pattern with the same number of positions. The variation of the positions from linearity results from a combination of uncertainties in the stepper motor step size and uncertainties in the image registration.
The CCD sensor has 640 × 480 pixels with a pixel dimension of 7.4 μm per side and a bit depth of 14. The lens has a focal length of f = 50 mm and a full aperture diameter of 17.85 mm when the f/# is set at 2.8. Illumination is broadband, but the system has a mild peak at a wavelength of 630 nm, as shown in Fig. 4. The angular field of view (FOV) for a pixel is calculated from

θFOV = FOVV/(N DW), (2)

where FOVV is the vertical field of view, DW is the distance from the lens to the target, and N is the number of pixels (480) in the vertical direction. An FOVV of 65.2 mm was measured at the target by using the image on the camera as a guide. At a wavelength of 630 nm, the optical cutoff is 35.3 microradians as calculated from Eq. (1), whereas θFOV is 93.7 microradians. As such, image resolution below 0.38 pixels would be considered below the Abbe diffraction limit for this aperture. For an aperture setting of f/12, the limit occurs at 1.61 pixels as measured on the image plane.
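The quoted pixel-space limits can be reproduced directly from Eqs. (1) and (2); a minimal numerical check using the values in the text:

```python
# Per-pixel angular field of view, Eq. (2), and the Abbe diffraction
# limit expressed in image-plane pixels, using the values from the text.

FOV_V = 65.2e-3  # vertical field of view at the target, m
D_W = 1.45       # lens-to-target distance, m
N = 480          # vertical pixel count

theta_fov = FOV_V / (N * D_W)    # radians subtended by one pixel
theta_abbe = 630e-9 / 17.85e-3   # Abbe cutoff at full aperture (a = 1)

print(f"theta_FOV  = {theta_fov * 1e6:.1f} microradians")   # ~93.7
print(f"full f/2.8 = {theta_abbe / theta_fov:.2f} pixels")  # ~0.38
print(f"f/12 limit = {630e-9 / (50e-3 / 12) / theta_fov:.2f} pixels")  # ~1.61
```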
V. PROCESSING APPROACH
After a sequence is collected, the following steps are applied:
1. A phase correlation method is used to measure the translation differences between each frame and the first frame.
2. Each frame is up-sampled from the original size to 5120 × 3840 pixels through multiple applications of cubic interpolation to achieve a magnification factor of Mss = 8.
3. Each up-sampled frame is shifted by the measured translation differences (times eight) to align the frames.
4. The up-sampled frames are averaged into a single combined frame, which is cropped to a size of 1024 × 1024 pixels to ease further processing.
5. SeDDaRA blind deconvolution30 is applied to the single image to reveal the super-sampled image.
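The alignment and combination steps can be sketched as follows. This is an illustrative approximation, not the production pipeline: the phase correlation here returns integer-pixel shifts, whereas the experiment measures sub-pixel translations, and the names `frames`, `phase_correlate`, and `impass_combine` are hypothetical.

```python
import numpy as np
from scipy.ndimage import shift, zoom

# Sketch of steps 1-4 (phase correlation, up-sampling, alignment,
# averaging). `frames` is assumed to be a list of 2D arrays of a static
# scene with small translations between them.

MSS = 8  # super-sampling magnification factor

def phase_correlate(ref, img):
    """Translation of img relative to ref via the cross-power spectrum."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(img)
    cross = F1 * np.conj(F2)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices above the midpoint to negative shifts.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

def impass_combine(frames):
    shifts = [phase_correlate(frames[0], f) for f in frames]    # step 1
    upsampled = [zoom(f, MSS, order=3) for f in frames]         # step 2
    aligned = [shift(u, (dy * MSS, dx * MSS), order=3)          # step 3
               for u, (dy, dx) in zip(upsampled, shifts)]
    return np.mean(aligned, axis=0)                             # step 4
```

Step 5, the SeDDaRA deconvolution, is then applied to the returned combined frame.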
SeDDaRA is a non-iterative blind deconvolution method that creates a point spread function (PSF) by comparing the Fourier magnitude of the target image with the magnitude of a reference image or constant.26 A Wiener filter, also referred to as a pseudo-inverse filter, is applied to deconvolve the PSF from the image. SeDDaRA does not change any values in the phase component.
The ImPASS processing is handled as discrete steps by using Digital Optics V++27 for phase correlation, Harris IDL28 for interpolation and combining, and Quarktet Tria29 for deconvolution, but the steps can be combined into a single process if needed. Processing is fairly efficient, requiring less than a minute for most steps. The interpolations in step 2 are the most time-consuming, requiring about 70 min for 80 images on a typical desktop computer.
The blind deconvolution method requires a reference image, acting as the spatial frequency model, and several parameters. A test in Ref. 3 and an example in Sec. VII demonstrate that the reference image does not need to resemble the scene; however, doing so can improve the results. For this effort, the reference image was a digital representation of a USAF chart, as shown in Fig. 6. The chart differs from the instrument image in orientation, angle, and absence of noise. The reference chart was not scaled in any way to match the dimensions of the instrument or up-sampled images. By virtue of the deconvolution process, no phase information from the reference image is passed to the processed image.
For deconvolution, the Wiener filter parameter C2 and PSF cleaning radius (RoI) parameters were varied during repeated application of SeDDaRA to a single up-sampled combined image. The C2 parameter is a value in the denominator of a Wiener filter that prevents the over-amplification of noise.30 The RoI, used in Tria and expressed in pixels, isolates the PSF in image space by removing noise and features outside of the circle defined by the radius, producing a better deconvolution. The optimum parameters, RoI = 20 and C2 = 0.002, were determined by measuring the resolution, as described below, of the processed results. The parameters and reference image were then held as constants for the processing of the image sequences.
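The role of the C2 parameter can be illustrated with a schematic Wiener deconvolution in the spirit of SeDDaRA. This is a stand-in, not Caron's published algorithm: here the transfer-function magnitude is estimated from the ratio of smoothed Fourier magnitudes of the degraded image and the reference, and the smoothing width `sigma` is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Schematic SeDDaRA-like deconvolution: estimate a zero-phase transfer
# function by comparing Fourier magnitudes of the degraded image and a
# reference, then apply a Wiener filter. Illustrative only.

def seddara_like_deconv(degraded, reference, c2=0.002, sigma=3.0):
    G = np.fft.fft2(degraded)
    R = np.fft.fft2(reference)
    # Smooth the magnitude spectra so the ratio is a stable OTF estimate.
    mag_g = gaussian_filter(np.abs(G), sigma)
    mag_r = gaussian_filter(np.abs(R), sigma)
    H = np.clip(mag_g / (mag_r + 1e-12), 0.0, 1.0)  # real, zero-phase OTF
    # Wiener filter: C2 prevents over-amplification of noise where H ~ 0.
    F_hat = G * H / (H**2 + c2)
    return np.fft.ifft2(F_hat).real
```

Because the estimated transfer function is real and zero-phase, the phase of the degraded image passes through unchanged, consistent with the description of SeDDaRA above.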
VI. RESOLUTION MEASUREMENT
Resolution targets are a conventional method of measuring the resolution of an imaging system.31 An image analyst determines the smallest set of bars that can be resolved. This approach is not very effective when a limiting factor is the digital sampling in the image plane: translation of the scene with respect to the pixel positions can result in varying estimates of the resolution. Instead, a one-dimensional point spread function (PSF) was derived from a slanted edge in the image32–34 to establish the image resolution. This process begins with taking multiple profile plots across a slanted edge feature in the image, indicated by the graphic arrow in Fig. 2. The plots, as shown in Fig. 7 (top), are taken from consecutive rows. The edge-like sigmoid function, having the form of

s(x) = a1/{1 + exp[−(x − a2)/a3]}, (3)

is fit to each profile to determine the edge position, a2, with sub-pixel accuracy. Other edge functions have been considered, such as the Fermi function35 or Cauchy function,36 but we found that the sigmoid function produced a better fit to the data.32 The MPFit function in IDL37 was used to estimate the fit parameters a1, a2, and a3. The sets are shifted according to their respective edge positions and combined into a single dataset in which the curve is well-populated, as shown in Fig. 7 (bottom). The sigmoid function is again fit to this dataset so that a model of the curve, noiseless and evenly spaced, can be created. The spatial derivative is applied to the simulated edge to create a one-dimensional PSF.
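The per-profile fit can be sketched with scipy's `curve_fit` standing in for MPFit; the synthetic profile and parameter values here are illustrative, not measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sub-pixel edge location by fitting a sigmoid edge model to a profile
# taken across the slanted edge.

def sigmoid_edge(x, a1, a2, a3):
    """Edge model: amplitude a1, edge position a2, width a3."""
    return a1 / (1.0 + np.exp(-(x - a2) / a3))

x = np.arange(30, dtype=float)
true = sigmoid_edge(x, 1.0, 14.3, 1.2)  # synthetic edge profile
noisy = true + np.random.default_rng(2).normal(0, 0.01, x.size)

popt, _ = curve_fit(sigmoid_edge, x, noisy, p0=[1.0, 15.0, 1.0])
print(f"edge position a2 = {popt[1]:.2f}")  # recovered near 14.3
```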
The full-width at half-maximum (FWHM) of the PSF can be used as a resolution measure; however, the PSF can also be used to create a direct relationship with Rayleigh’s, Abbe’s, or Sparrow’s criteria. These criteria are met when two PSFs are located a distance apart such that the value of the valley is a specific ratio of the height of one of the peaks. For the well-known Rayleigh cutoff, this value is 0.737. It is 0.95 for Abbe’s diffraction limit and slightly less than unity for the Sparrow limit.38 Setting a1 = 1 and a2 = 0, the derivative of Eq. (3) with respect to the spatial variable x is

p(x) = σ(x/a3)[1 − σ(x/a3)]/a3, (4)

where σ(u) = 1/[1 + exp(−u)] is the unit sigmoid. Two PSFs are positioned such that each maximum falls at −Δx and Δx, putting the minimum of the valley at x = 0. The sum of the PSFs is

S(x) = p(x − Δx) + p(x + Δx). (5)

The ratio of the sum at x = 0 and x = Δx is

R(Δx) = S(0)/S(Δx) = 2p(Δx)/[p(2Δx) + 1/(4a3)], (6)

where the last term in the denominator results from σ(0) = 0.5. R(Δx) can then be plotted to find where it crosses the values for the criteria. This curve is plotted in Fig. 8 (top), where the separation distances for the criteria have been labeled. The resolution according to the specific criterion is then 2|Δx|. Figure 8 (bottom) shows the sum of two equal PSFs that meet Abbe’s criterion.
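The criterion crossings can be reproduced numerically with the normalized PSF (a1 = 1, a2 = 0); a short sketch, with the Sparrow threshold taken as 0.999 for "slightly less than unity" and separations reported in units of a3:

```python
import numpy as np

# Valley-to-peak ratio R for two sigmoid-derivative PSFs separated by
# 2*dx, and the separations at which it crosses each criterion value.

def psf(x, a3=1.0):
    s = 1.0 / (1.0 + np.exp(-x / a3))  # unit sigmoid, sigma(0) = 0.5
    return s * (1.0 - s) / a3          # its derivative

def ratio(dx, a3=1.0):
    # Valley S(0) = 2 p(dx); peak S(dx) = p(2 dx) + p(0), p(0) = 1/(4 a3).
    return 2.0 * psf(dx, a3) / (psf(2.0 * dx, a3) + 1.0 / (4.0 * a3))

dx = np.linspace(0.01, 6.0, 2000)
R = ratio(dx)
tail = np.argmax(R)  # search only the descending branch of the curve
for name, level in [("Sparrow", 0.999), ("Abbe", 0.95), ("Rayleigh", 0.737)]:
    cross = dx[tail + np.argmin(np.abs(R[tail:] - level))]
    print(f"{name:8s}: resolution 2|dx| = {2 * cross:.2f} (units of a3)")
```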
VII. EXPERIMENTAL RESULTS
The reason for moving the translation stage is to sample the area of a single pixel with multiple displaced images. For a magnification factor of Mss = 8, this requires eight horizontal steps and eight vertical steps to sample a pixel.3 The current setup cannot sample a pixel in object space with this level of accuracy, so additional positions are required to provide assurance that the pixel space is sampled. As such, 16 frames at each of 80 positions were recorded across a distance of 3 pixels in the image plane in an “X” pattern, as shown in Fig. 3.
After each sequence, the lens iris was changed and the camera was re-focused. The exposure time was reset to bring the maximum value in the image to about 52 000 Digital Numbers (DNs), below saturation at 60 000 DNs.
After capture, the sequences are processed according to the algorithm described in Sec. V. The combined image, shown in Fig. 9 (bottom) below a close-up of the instrument image, shows additional detail that cannot be seen in a single instrument image, as evidenced by the “vertical” bars in group 1. This visual improvement, however, is a result of averaging pixelated images and, as will be seen, does not yield a significant improvement in resolution.
SeDDaRA is then applied to the image to produce a super-sampled result, as shown in Fig. 10. Processing artifacts, in the form of ripples, appear as a result of mismatched spatial frequencies that are present in the reference image but not in the target image. The improvement in resolution is easily discerned. The horizontal bars of group 1, element 5, and vertical bars of group 1, element 6, are clearly resolved as is some structure in groups 4 and 5.
Figure 11 shows profile plots, expressed as a function of viewing angle, taken down the horizontal bars in group 1, as marked by the stars in Fig. 9 (top). The profile plots of the instrument image and combined image show that resolution is somewhat improved by aligning and averaging the images. The images appear to have improved resolution because the algorithm averages over the digitization of the scene in the image plane. The super-sampling is only apparent after deconvolution has been applied, as shown in the profile plot in Fig. 11 (bottom).
Results are presented as a function of effective f-number. Also referred to as the working f-number, feff/# is a modification that takes into account that the object plane is not at infinity; it is calculated using feff/# = (1 + |M|)f/#, where |M| is the magnification.39 The direct measurement of the aperture diameter is complicated by the iris being enclosed in the casing immediately behind the front lens. To verify f/# values, the diameter for the lowest setting was measured with calipers and confirmed to match the manufacturer’s specification. Averaged images of a white screen were captured for each iris diameter setting, in addition to a dark frame. The measured f/# was then calculated using

f/#meas = (f/D2.8) √[(S̄2.8 − S̄dark)/(S̄f/# − S̄dark)],

where f is the focal length, D2.8 is the diameter of the aperture for f/2.8, S̄f/# is the average pixel value for the image captured with setting f/#, S̄dark is the average pixel value of the dark frame, and S̄2.8 is the average pixel value for the image captured with setting f/2.8. The measured values, shown in Table I, are consistent with the effective f-numbers. The errors are not viewed as significant and are likely the result of uncertainty in the position of the knob on the lens casing.
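The f-number recovery from flat-field levels can be sketched as follows; the pixel values are hypothetical stand-ins for the averaged flat-field measurements, not the measured data.

```python
import math

# Infer the f-number from the mean image level of a flat white screen,
# relative to the f/2.8 exposure. Flux scales with aperture area (D^2),
# so D scales with the square root of the dark-subtracted signal ratio.

def measured_f_number(f_mm, d28_mm, mean_f, mean_dark, mean_f28):
    d = d28_mm * math.sqrt((mean_f - mean_dark) / (mean_f28 - mean_dark))
    return f_mm / d

# Hypothetical values: a setting passing 1/16 of the f/2.8 light should
# measure close to f/11.2 (= 2.8 * 4).
print(measured_f_number(50.0, 17.85, 3300.0, 50.0, 52050.0))
```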
TABLE I. Nominal f/#, effective f-number feff/#, measured f/#, and error (%). (Table values not reproduced here.)
Resolution measurements were applied to the instrument image, the combined up-sampled image, and the processed image for each iris diameter size. Figure 12 (top) shows the measurements from an instrument image taken as a function of the effective f/#. Diffraction limit values are also displayed, as calculated from Eq. (1), for a = 1. The resolution measurement follows the diffraction limit line for smaller aperture diameters but slopes upward for increased iris diameter as a result of the greater prominence of additional aberrations.1
The resolution measurements for the combined image are shown in Fig. 12 (bottom). The measurements for the higher f-numbers are similar to those of the instrument image and remain above the diffraction limit.
The resolution measurements for the processed image are shown in Fig. 13 (top). For the larger f-numbers, feff/12.4 to feff/22.8, the resolution obtained by applying ImPASS is significantly below the diffraction limit. The greatest improvement is at feff/12.4, where the resolution achieved with ImPASS is 68% of the optical diffraction limit. The paths cross around feff/10, where the measured resolution rises above the criterion. Optimum resolution is achieved at feff/2.9, where the measured resolution is 30% of that of the instrument image. Figure 13 (bottom) shows the resolution measurements using Rayleigh’s criterion. The results demonstrate that super-resolution is possible for macroscopic imaging systems.
To establish the variations in the resolution measurements, the edge response method was applied repeatedly to different locations in the instrument image, combined up-sampled image, and processed image for the feff/12.4 sequence. The variation as expressed by the ratio of the standard deviation to the average was about 0.8% for both the instrument and combined up-sampled images. As a result of the image processing artifacts, the variation of resolution in the processed images is higher at 6.9%. These values are expressed as vertical error bars in the graphs but are obscured by the data markers.
As mentioned previously, the reference image in the SeDDaRA deconvolution serves as a model of the spatial magnitudes. Typically, the closer the reference image is to the target image, the better the deconvolution, but this is not critical. Figure 14 shows a processed image from the feff/14.2 set that was processed by using the Barcelona image (bottom) as the reference. The processed image is visually similar to Fig. 13 aside from a slight increase in processing artifacts. The measured resolution, 106 microradians, is slightly higher than the case where a digital resolution chart was used as reference (102 microradians); however, it is still significantly below the optical diffraction limit of 156 microradians for this aperture.
The experiment was then repeated using a narrower spectrum. A Corion 10BPF70 650 nm filter with a bandwidth of 70 ± 30 nm was placed in front of the lens. Image sequences were obtained for eight aperture diameters and processed as above. Results for the instrument and processed images are shown in Fig. 15. The resolution of the instrument image improves as would be expected given the smaller PSF. The resolution of the processed image also improves, achieving 0.05 milliradians at feff/6.2. The lowest ratio of measured resolution over diffraction limit occurs at feff/14.5 with a value of 0.38. Representative instrument and processed images are shown in Fig. 16 using the same contrast settings.
VIII. ALTERNATIVE RESOLUTION MEASUREMENT
The advantage of the slanted edge measurement is the direct comparison to the resolution in image space. Many methods, such as Fourier Ring Correlation (FRC)42,43 and the more recent circular average power spectral density,44 derive the resolution from Fourier space. For FRC, a histogram consisting of concentric rings is created using

FRC(r) = Σri F1(ri) F2*(ri) / √[Σri |F1(ri)|² Σri |F2(ri)|²],

where F1 and F2 are Fourier transforms of two images sharing the same region-of-interest but having independent noise, and the sums run over the pixels ri within the ring at radius r. The variable r is the distance from the ring to the center of the Fourier transform. The image resolution is specified at the point where FRC(r) falls below a threshold, typically set at 0.14.42
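A minimal FRC implementation over integer-radius rings can be sketched as follows; ring binning and normalization details vary across implementations, so this is illustrative rather than the cited sub-image method.

```python
import numpy as np

# Fourier Ring Correlation between two images of the same scene with
# independent noise, binned into integer-radius rings about the center
# of the shifted spectrum.

def frc(img1, img2):
    F1 = np.fft.fftshift(np.fft.fft2(img1))
    F2 = np.fft.fftshift(np.fft.fft2(img2))
    cy, cx = np.array(F1.shape) // 2
    y, x = np.indices(F1.shape)
    r = np.hypot(y - cy, x - cx).astype(int)  # ring index per pixel
    n = r.max() + 1
    num = np.bincount(r.ravel(), (F1 * np.conj(F2)).real.ravel(), n)
    d1 = np.bincount(r.ravel(), (np.abs(F1) ** 2).ravel(), n)
    d2 = np.bincount(r.ravel(), (np.abs(F2) ** 2).ravel(), n)
    return num / np.sqrt(d1 * d2 + 1e-30)

# Identical inputs give FRC = 1 in every ring; independent noise drives
# the curve toward zero at high spatial frequency.
```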
The FRC metric was applied to the 650 nm dataset by using the sub-image approach45 to determine applicability. The results from four representative aperture settings are shown in Fig. 17. The graph suggests that the image resolution subceeds the optical resolution in all cases. As shown in Table II, Abbe’s optical resolution ranges from 3.37 to 26.51 cycles/mrad, while the estimated resolution from the FRC curves is above 31 cycles/mrad.
TABLE II. Nominal f/#, effective f-number feff/#, optical resolution, measured resolution, and FRC-measured resolution. (Table values not reproduced here.)
However, this conclusion is incorrect, as evidenced by the lack of correlation with either the optical resolution limit or the slant edge measurements. The FRC method relies on the curve decreasing as uncorrelated noise dominates the higher frequencies. With this image set, images are up-sampled by a factor of 8 before alignment and averaging. As a result, there is little uncorrelated noise at the higher frequencies. The “tails” above 30 cyc/mrad are possibly the result of correlation between image interpolation artifacts. Adding synthetic noise to the reconstructed image removes these tails; however, this heavily influences the derived resolution threshold. As such, additional investigation is required to modify FRC for this application.
Limitations of ImPASS imposed by the experimental setup are not yet understood. In an effort to improve results, various changes in the setup and parameters were attempted. This included changing the pattern of the target positions from an “X” shape to a “V” shape, increasing the number of positions, measuring the translation differences in various ways, making small changes to the lens focus, applying dark frame and flat field correction, removing frames with poorer alignments, and taking steps to reduce environmental vibrations. These changes had little effect on the outcome. In future tests, the aim is to use a camera with lower noise and larger pixel format, and a lens with fewer aberrations.
While this effort is focused on macroscopic imaging, there are certain similarities between ImPASS and super-resolution methods in microscopy that warrant consideration. For structured illumination microscopy (SIM),40 a sinusoidal pattern of illumination is applied to the target. A series of images is taken while the pattern is shifted and rotated. The images are processed and combined by using a Wiener-like deconvolution method,46 revealing the super-resolved structures. This is similar to ImPASS, where an image set is created by translating the target and is then processed and deconvolved. SIM is, however, typically limited to achieving an improvement of a factor of 2 over the diffraction limit,6 whereas ImPASS achieved a factor of 2.63 in this initial test.
Fourier ptychography (FP)12 is also closely related in process. An image set is created by capturing the scene when the angle of illumination, coming from underneath a glass slide, is varied. The angle of illumination is varied by switching on and off individual LEDs in an array, as depicted in Fig. 18. The processing also involves alignment and deconvolution. It is possible that the refraction of light through the glass slide at different angles produces a similar collection of Fourier phase. FP often applies wavefront correction during the reconstruction process to improve results, which is not used for ImPASS.
Whereas ImPASS can be considered for microscopic imaging, it has a unique advantage for macroscopic imaging in that it does not require active illumination of the target. Targets such as satellites would not require illumination by laser, as is the case with the concept of Fourier telescopes.47 ImPASS could be beneficial for large-scale imaging systems in which high resolution is obtained by using telescope/camera arrays that operate separately yet produce combined images with resolution as if from a single large-aperture telescope. The concept is an alternative to very large single monolithic telescopes, segmented telescopes, or interferometer-based sparse aperture telescopes.41 A single line array of camera/lens systems operating in a push-broom fashion could potentially serve as a lightweight and cost-effective approach for space-borne Earth-imaging satellites.
This study presented the application of ImPASS to sequences of slightly displaced experimental images while the aperture diameter was varied. The first test was carried out by using panchromatic illumination and the second with 650 ± 70 nm illumination. The images in each sequence were up-sampled, aligned, and combined. Application of deconvolution produced images with improved resolution. The slant edge technique was applied to an edge feature to quantify image resolution. The resolution measurements on processed images reveal that ImPASS produces super-resolution for a range of front aperture diameters. The best panchromatic result achieved a 0.68 ratio of measured resolution over diffraction limit and a ratio of 0.38 was reached for the narrow-band case. Subceeding the diffraction limit by a factor of 2.63 was accomplished with a simple setup consisting of a camera, lens, and movable target and without specialized, adaptive or customized optics. For the larger apertures, ImPASS was able to produce resolution near the diffraction limit but did not subceed the limit. A higher level of second-order aberrations may have an adverse effect in this region.
With rigorous attention to the resolution measurement, this finding provides validation that super-sampling methods can obtain resolutions that subceed the diffraction limit of the instruments. Additional study is warranted to better understand the performance gap between simulations3 and experiment, which can take the form of an improved test bed, optical modeling, and theoretical development. Further understanding of how image phase information beyond the diffraction limit is recovered will improve methods, potentially producing new imaging modalities.
Conflict of Interest
The author has no conflicts to disclose.
Data Availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.