Diffuse optical imaging (DOI) is a label-free, safe, inexpensive, and quantitative imaging modality that provides metabolic and molecular contrast in tissue using visible or near-infrared light. DOI modalities can image up to several centimeters deep in tissue, providing access to a wide range of human tissues and organ sites. DOI technologies have benefitted from several decades of academic research, which has provided a variety of platforms that prioritize imaging depth, resolution, field-of-view, spectral content, and other application-specific criteria. Until recently, however, acquisition and processing speeds have represented a stubborn barrier to further clinical exploration and implementation. Over the last several years, advances in high-speed data acquisition enabled by high-speed digital electronics, newly available sources and detectors, and innovative new scanning methods have led to major improvements in DOI rates. These advances are now being coupled with new data processing algorithms that utilize deep learning and other computationally efficient methods to provide rapid or real-time feedback in the clinic. Together, these improvements have the potential to help advance DOI technologies to the point where major impacts can be made in clinical care. Here, we review recent advances in acquisition and processing speed for several important DOI modalities.
Photon–tissue interactions lie at the core of most medical imaging modalities. High-resolution techniques such as optical microscopy, optical coherence tomography, mammography, and x-ray computed tomography rely on ballistic or quasi-ballistic photons (i.e., photons that are singly reflected or transmitted with minimal scattering). For modalities that rely on photon transmission through thick tissues, high energy photons (e.g., x rays) must be used to avoid scattering, which can have the unfortunate side effect of inducing DNA damage. High-resolution modalities using lower energy photons have penetration depths limited to just a few millimeters due to optical scattering, which rapidly degrades the image resolution. In contrast to these techniques, diffuse optical imaging (DOI) makes use of multiply scattered photons, which can interact with tissue up to several centimeters below the surface.1 Figure 1 (top) shows an example of light propagation through optically scattering media, demonstrating the diffusion of light through the sample volume. The trade-off for the increase in penetration depth achievable with DOI is the increased uncertainty about the probing volume, limiting the spatial resolution to approximately 0.5–10 mm, depending on the specific measurement geometry and tissue type.2,3 For many applications, the advantages of DOI, including label-free, safe, and inexpensive imaging with sensitivity to hemoglobin and other tissue chromophores, make up for this limited spatial resolution. Substantial academic research over the last 25 years has focused on improving the imaging depth, resolution, field-of-view, and spectral content. Only recently, however, have major improvements in both acquisition and processing speed made clear the possibility of rapid or real-time DOI feedback. These improvements have the potential to bring metabolic and molecular information directly and immediately to the treating physician.
This advance may help overcome prior barriers in DOI applications, commercialization, and overall clinical adoption.
Our hypothesis in this review is that acquisition, processing, and data visualization speed limitations have played an important role in limiting the clinical impact of DOI and that these throughput issues are being addressed by the academic community and industry at an accelerating rate due to the availability of new hardware and emerging computational tools. Some DOI methodologies quantify tissue-level changes in oxy- and deoxy-hemoglobin, which take place over minutes or hours, so speed may seem like a secondary consideration in these cases. However, measurement speeds can limit the willingness of patients and physicians to undergo the imaging procedure. For instance, a recent large clinical study of breast cancer patients required imaging sessions of nearly 1 h4 to image roughly a 10 × 10 cm² region of tissue. Faster measurements could enable the same region to be imaged more quickly, reducing the burden of the imaging modality on both the patient and medical professionals.
Even if images can be collected quickly, if the data cannot be processed and displayed in a reasonable amount of time, it becomes difficult to know whether the data are of sufficient quality to be useful or whether the targeted tissue was adequately sampled. The usefulness of rapid data processing and visualization is highlighted by the clinical success of other imaging methodologies that provide real-time or near real-time feedback (e.g., ultrasound and mammography). Many DOI methodologies require minutes to hours to collect and process images and are not competitive with several standard-of-care imaging modalities in this respect. More rapid processing and data visualization could greatly enhance the diagnostic potential of DOI by reducing latencies in the clinic and ensuring that sufficiently useful information has been captured.
Finally, the development of more widely available rapid DOI systems is also likely to enrich the study of fast hemodynamic and scattering changes. Physiological phenomena such as arterial volume change during the cardiac cycle and venous volume change during the respiratory cycle occur at approximately 0.1–1.7 Hz in healthy humans at rest.5 The photoplethysmographic (PPG) waveform shape has been shown to encode important hemodynamic information, necessitating sampling rates substantially faster than this to capture these dynamics.6,7 In the brain, fast optical scattering changes or event-related optical signals (EROS) have been used to study neuronal activation.8–10 In tumors, oxygenation cycles have been identified to be on timescales of about 15 min,11 with one study showing cycles as fast as one cycle per minute.12 Other studies using high-speed DOI methods have demonstrated that breast tumor hemodynamics behave differently than normal breast tissue following a breath hold,13 which would have been impossible to appreciate using slower variants of DOI. Faster acquisition and processing of DOI data would make it possible to image larger volumes of tissue at higher speeds to uncover subtleties of hemodynamic regulation in disordered tissues such as tumors as well as highly regulated tissues such as the brain.
II. OVERVIEW OF DIFFUSE OPTICAL IMAGING METHODOLOGIES
DOI instruments fall into three main categories: Continuous Wave (CW), Time Domain (TD), and Frequency Domain (FD). CW techniques are typically the simplest in design, least expensive, and most capable of taking rapid measurements. However, CW techniques measure only relative changes in tissue chromophore concentrations.14 In contrast, FD and TD techniques are capable of separating the effects of scattering from absorption, which enables the direct measurement of the tissue absorption coefficient (μa). Multi-wavelength FD and TD measurements can be used to quantify the absolute molar concentrations of oxy- and deoxy-hemoglobin in tissue, which have markedly different extinction spectra in the visible and near-infrared (NIR) (Fig. 2). However, this quantitative information requires considerably more data collection compared to CW techniques, resulting in acquisition and processing bottlenecks that impede real-time acquisition and display.
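As an illustrative sketch of this multi-wavelength unmixing step, the two-wavelength, two-chromophore problem reduces to a 2 × 2 linear system. The extinction coefficients below are approximate, illustrative values; real analyses should use published tabulated spectra.

```python
import math

# Approximate base-10 molar extinction coefficients (cm^-1 / M) near
# 750 and 850 nm. Illustrative values only; consult tabulated
# hemoglobin spectra for real work.
EPS = {750: {"HbO2": 518.0, "Hb": 1405.0},
       850: {"HbO2": 1058.0, "Hb": 691.0}}

LN10 = math.log(10.0)

def unmix(mua_750, mua_850):
    """Solve mua(lambda) = ln(10) * sum_i eps_i(lambda) * C_i for the
    two concentrations (mol/L) from absorption measured at two
    wavelengths: a direct 2x2 linear solve."""
    a, b = LN10 * EPS[750]["HbO2"], LN10 * EPS[750]["Hb"]
    c, d = LN10 * EPS[850]["HbO2"], LN10 * EPS[850]["Hb"]
    det = a * d - b * c
    return ((d * mua_750 - b * mua_850) / det,
            (-c * mua_750 + a * mua_850) / det)

# Round trip: synthesize mua from known concentrations, then recover them.
c_hbo2, c_hb = 20e-6, 10e-6   # 20 and 10 micromolar
mua1 = LN10 * (EPS[750]["HbO2"] * c_hbo2 + EPS[750]["Hb"] * c_hb)
mua2 = LN10 * (EPS[850]["HbO2"] * c_hbo2 + EPS[850]["Hb"] * c_hb)
rec = unmix(mua1, mua2)
print(round(rec[0] * 1e6, 3), round(rec[1] * 1e6, 3))  # 20.0 10.0 (uM)
```

With more than two wavelengths, the same system becomes overdetermined and is solved in the least-squares sense, which improves robustness to noise.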
Different implementations of DOI technologies exist within the broad categories of CW, TD, and FD. We review four different commonly utilized technologies here: Time Domain Diffuse Optical Imaging (TD-DOI), Frequency Domain Diffuse Optical Spectroscopy (FD-DOS), Diffuse Optical Topography/Tomography (DOT), and Spatial Frequency Domain Imaging (SFDI). TD-DOI uses short pulses of light to measure the temporal point spread function (TPSF) of tissue, whereas FD-DOS uses temporally modulated light to measure how a sample changes the amplitude and phase of a propagating photon density wave. Both of these techniques can be used to extract tissue absorption and scattering. DOT uses measurements made at many different locations of the tissue to reconstruct the spatial variation of absorbers in the sampled region. While DOT can be performed in CW, TD, or FD mode, CW-DOT is more common due to the large number of locations that need to be measured and the resulting complexity of instrumentation. Finally, SFDI is a wide-field FD technique that measures the response of tissue to spatially modulated light projected at different frequencies to sample the tissue’s modulation transfer function (MTF). The reconstructed MTF can then be used to determine the absorption and scattering properties of the tissue on a pixel by pixel basis.
Each of the above modalities presents different challenges to increasing acquisition and processing speed. For TD-DOI, constructing the TPSF with sufficient temporal resolution to extract absorption and scattering necessitates exposure times of around 100 ms per point to accumulate adequate statistics,15 limiting the measurement speed. For FD-DOS, the major challenges relate to the hardware needed for rapid data acquisition and algorithms for efficient data processing. For SFDI, switching between spatial frequencies and wavelengths can significantly slow acquisition, and solving the inverse problem for each pixel in an image can be a computational bottleneck without the use of parallel computation or look-up-tables. In CW-DOT, light must be injected and detected at multiple points along the tissue surface using an array of fibers and optical switches, through mechanical scanning, or via frequency domain multiplexing,16 which can limit the acquisition speed.
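To see why a look-up table removes the per-pixel inversion bottleneck, consider the sketch below. The forward model here is a deliberately made-up smooth function standing in for the real diffusion-based reflectance model, and the brute-force nearest-neighbor search would be replaced by gridded interpolation in practice.

```python
# Sketch of look-up-table (LUT) inversion for per-pixel processing.
# `forward` is a placeholder for the diffusion-based reflectance model,
# chosen only to be smooth and invertible over the grid.
def forward(mua, musp, fx):
    # NOT the real SFDI model -- an illustrative stand-in.
    return musp / (musp + mua + fx)

FREQS = (0.0, 0.2)  # two spatial frequencies (mm^-1), e.g. DC and AC

# Precompute the table once over a grid of optical properties.
GRID = [(0.002 + 0.002 * i, 0.5 + 0.1 * j)   # (mua, musp) in mm^-1
        for i in range(50) for j in range(20)]
LUT = [tuple(forward(mua, musp, fx) for fx in FREQS) for (mua, musp) in GRID]

def invert(rd_dc, rd_ac):
    """Nearest-neighbor LUT inversion. This linear scan is O(N) per
    pixel; gridded interpolation makes each pixel effectively O(1)."""
    best = min(range(len(LUT)),
               key=lambda k: (LUT[k][0] - rd_dc) ** 2
                           + (LUT[k][1] - rd_ac) ** 2)
    return GRID[best]

# Round trip on a grid point:
truth = GRID[137]
meas = tuple(forward(truth[0], truth[1], fx) for fx in FREQS)
print(invert(*meas) == truth)  # True for noise-free grid points
```

The key trade: the forward model is evaluated once offline for every grid entry, so no per-pixel iterative solve is needed at run time.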
Data processing bottlenecks can also lead to slow imaging speeds. Nearly all DOI methods require an inverse problem to be solved. Typically, a model of photon transport is developed, and the parameters of that model are changed iteratively until the model matches the recorded data. This processing pipeline is outlined in the bottom of Fig. 1. Depending on the model, this iterative solution can be computationally intensive, significantly reducing the rate at which data can be displayed. However, data processing bottlenecks are beginning to be overcome through the use of parallel computing tools such as Graphics Processing Units (GPUs) or by using machine learning techniques such as Deep Neural Networks to avoid iterative methods entirely.
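A minimal sketch of this iterative fit-to-model loop is shown below, recovering μa from a two-distance amplitude ratio using the infinite-medium diffusion solution (real pipelines use semi-infinite or numerical forward models and typically fit scattering as well).

```python
import math

def mueff(mua, musp):
    # Effective attenuation coefficient from diffusion theory (mm^-1).
    return math.sqrt(3.0 * mua * (mua + musp))

def model_ratio(mua, musp, r1, r2):
    # Ratio of diffuse fluence at r2 vs r1 in an infinite homogeneous
    # medium: phi(r) ~ exp(-mueff*r)/r, so source power cancels.
    m = mueff(mua, musp)
    return (r1 / r2) * math.exp(-m * (r2 - r1))

def fit_mua(measured_ratio, musp, r1=10.0, r2=20.0, tol=1e-10):
    """Iterative inversion by bisection: the modeled ratio decreases
    monotonically with mua, so bracket the root and bisect."""
    lo, hi = 1e-6, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if model_ratio(mid, musp, r1, r2) > measured_ratio:
            lo = mid      # model too bright -> absorption guess too low
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round trip: simulate data at mua = 0.01 mm^-1, musp = 1.0 mm^-1 ...
data = model_ratio(0.01, 1.0, 10.0, 20.0)
print(fit_mua(data, 1.0))  # ... and recover ~0.01
```

Each bisection step requires a forward-model evaluation; with more complex forward models (e.g., finite elements), this per-iteration cost is precisely what makes iterative inversion the bottleneck discussed above.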
The reconstruction of 2D or 3D images from raw diffuse optical data differs depending on the particular imaging technique and geometry. In reflectance geometry, topographic maps are often generated by placing the data at a single point, typically the midpoint between the source and detector.4 Spatial scanning allows a 2D image to be built up from these single measurement points. This procedure is also used in transmission measurements where the source and detector are scanned over the tissue.16 SFDI reconstructions also assume that the signal arises from a single point and are typically performed at each pixel in the image.2 These single-point methods are a simplification, as the measured signal does not arise from a single location but is a volume-averaged result over the paths all photons take between the source and detector. The specific volume probed by a single source–detector pair depends on many factors, including the distance between the source and detector and the optical properties of the sample. More sophisticated 3D reconstruction algorithms, which do not assume that each measurement arises from a single location, can be used with measurements at multiple locations.17 These methods are able to produce volumetric reconstructions of tissue (see Sec. V).
Naturally, different modalities have different spatial sensitivities. For instance, FD measurements are sensitive to different volumes depending on whether phase or amplitude is considered, though both components are capable of probing several centimeters below the surface of tissue.18 SFDI, on the other hand, is a superficial technique, with sensitivity limited to roughly the top 5 mm of tissue depending on the spatial frequencies used.19
The remainder of this manuscript will focus on recent research to overcome these bottlenecks for several DOI techniques. In each section, a brief overview of the technique will be followed by a description of the recent improvements in imaging speed.
III. TIME-DOMAIN DIFFUSE OPTICAL IMAGING
Traditional TD-DOI relies on delivering a short pulse of near-infrared light into tissue. Remitted, diffusely scattered photons are detected with Time-Correlated Single Photon Counting (TCSPC) electronics, which capture the temporal point spread function (TPSF).20,21 This TPSF can be fit to an inverse model to decouple the μa and μs′ of the tissue. TD-DOI has proven valuable, especially for depth sensitive imaging of functional activation in the brain.22 However, it has typically suffered from both time-intensive acquisition and processing.
A key component for improving the acquisition time has been the development of new sources and detectors, which allow for both higher speed measurements and multiplexing.21 For example, Vertical Cavity Surface Emitting Lasers (VCSELs) have emerged as an affordable, versatile light source capable of sub-ns pulse widths, making them ideal for TD-DOI. Similarly, fast-gated (<1 ns) single photon avalanche diodes (SPADs) have begun to emerge as a useful TD-DOI detector, particularly for short source–detector separations (Fig. 3). As a proof of concept, a single VCSEL and SPAD were integrated directly into a small probe measuring 25 × 20 mm². In principle, these elements could become much smaller, and multiple VCSELs and gated Silicon Photomultipliers could be integrated onto a single element.20 SPADs have also been utilized in an array capable of parallel detection (up to eight detectors) due to their low cost, allowing for high resolution tomographic reconstruction with limited scanning.23 The use of short source–detector separations can also increase acquisition speeds because there are more detectable photons in each pulse, and thus, fewer pulses are needed to adequately sample the TPSF. Short source–detector distances (less than 1 cm) with SPADs have increased depth sensitivity and signal quality and led to a 10⁵-fold increase (10³ s vs 10⁸ s) in acquisition speed compared to typical TCSPC methods.24
Utilizing a transmission geometry can also increase the number of detectable photons. For example, an optical mammography system was developed to simultaneously measure 2 wavelengths in ∼150 ms, leading to ∼10 min to acquire a breast image.15,25 Other groups have worked on increasing the information content by including 4 wavelengths with repeated scanning, leading to measurement times of about an hour.26 These measurement systems may similarly benefit from technological improvements such as SPADs and VCSELs to yield improved speed as well as sensitivity and accuracy.
A common issue for single-photon counting detection schemes such as TCSPC arises when more than one photon reaches the detector within a detection period, causing distortions in the measured TPSF. The spread spectrum technique has been implemented as a potential solution to improve cost, sensitivity, and speed.27–30 This technique utilizes pseudo-random bit sequences (PRBS) to modulate a continuous-wave light source according to a generated pattern. The detector is synchronized with a high-resolution timing module such that, as photons propagate through the tissue and reach the detector, the timing module timestamps the detected events. The detected photon timing events are then cross-correlated with the known PRBS to build up the TPSF much more quickly, capturing up to threefold more photons per excitation cycle. Building off the spread spectrum technique, a recent publication proposed to limit the sampling of the TPSF.30 The authors demonstrated that sampling at specific Laplace frequencies of the TPSF can encode depth sensitivity analogous to examining early vs late arriving photons. Using this technique, the time required for a single measurement can be decreased by a factor of 30 (2.5 ms vs 75 ms) by sampling only at a specific frequency rather than measuring the entire TPSF.
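The cross-correlation recovery step can be sketched as follows. The impulse response and sequence length here are hypothetical, and a real instrument correlates timestamped photon events rather than a dense sampled signal.

```python
def prbs7(seed=0b1111111):
    """One period of a 7-bit maximal-length sequence (period 127),
    generated by an LFSR with feedback from the two most significant
    bits, mapped to +/-1 for correlation."""
    state, out = seed, []
    for _ in range(127):
        bit = ((state >> 6) ^ (state >> 5)) & 1
        state = ((state << 1) | bit) & 0x7F
        out.append(1.0 if bit else -1.0)
    return out

def circ_xcorr(meas, code):
    n = len(code)
    return [sum(meas[(k + i) % n] * code[i] for i in range(n))
            for k in range(n)]

code = prbs7()
N = len(code)

# Hypothetical tissue impulse response: a short TPSF-like decaying tail.
h = [0.0] * N
for k, v in enumerate([0.0, 1.0, 0.7, 0.4, 0.2, 0.1]):
    h[k] = v

# Detected signal: circular convolution of the modulation code with h.
meas = [sum(code[(k - j) % N] * h[j] for j in range(N)) for k in range(N)]

# Cross-correlating with the known code concentrates h at the correct
# lags: an m-sequence's circular autocorrelation is ~N at lag 0 and -1
# at every other lag, so rec[k] ~ N*h[k] + constant.
rec = circ_xcorr(meas, code)
peak = max(range(N), key=lambda k: rec[k])
print(peak)  # 1: the lag of the impulse-response peak
```

Because every code chip carries signal, the TPSF accumulates at all lags simultaneously instead of one photon per excitation cycle, which is the origin of the speed advantage described above.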
Structured illumination for TD-DOI has been shown to improve both the acquisition speed and information content by allowing rapid collection of both spatial and spectral data, eliminating the need for point-by-point raster scanning.31–34 Using a compressive sensing (CS) framework (e.g., projecting Walsh–Hadamard patterns) allows low-resolution spatial information to be determined in about 25 min for tomographic applications. One example of such a compressive sensing TD-DOI device has been demonstrated with a broadband supercontinuum laser providing NIR light (700–820 nm), which is collected by a hyperspectral detector consisting of a spectrograph and a time-resolved 16-pixel Photomultiplier Tube (PMT) capable of TCSPC.33
Advances in imaging speed such as the ones described here are critical for the adoption of TD-DOI as a clinical tool. Rapid acquisition of data combined with improved spatial resolution and depth penetration will allow for the interrogation of understudied physiological phenomena. For instance, faster temporal acquisition will allow fluctuations in tumor oxygenation to be monitored over time.35 Since the presence of oxygen is critical for radiation therapy to be effective, high speed TD-DOI images could identify regions of a tumor that may be hypoxic and thus resistant to radiation therapy. High speed DOI images could also be used to track cycling hypoxia, which has been shown to be indicative of how aggressively a tumor is likely to spread.36 Additionally, new high-speed data processing techniques would be necessary in an intra-operative setting for tumor margin detection during resection.37 When combined with real-time analysis, DOI images could be used to inform surgeons of residual tumor signature. These advances may transform TD-DOI from a research tool into a powerful clinical imaging modality.
IV. FREQUENCY DOMAIN DIFFUSE OPTICAL SPECTROSCOPY
In lieu of the short pulses of light used in TD-DOI, frequency domain diffuse optical spectroscopy (FD-DOS) utilizes light sources modulated at radio frequencies (typically 50 MHz to 1 GHz). The effects of optical absorption and scattering are quantified by the measurement of the amplitude and phase of the propagating photon density wave that travels through the sample.38–41 FD-DOS does not require a pulsed laser or photon counting detector, so systems can generally be built using less expensive components and a more compact format than TD methods. One recent work, however, took advantage of the frequency content of a short optical pulse to perform FD measurements.42 The acquisition speed of multi-frequency FD-DOS systems has historically been limited to less than a few Hz due to the time needed to switch between lasers, modulation frequencies, and spatial locations on the sample.43,44 Single frequency systems tend to have acquisition rates closer to 50 Hz45 but can suffer from larger errors because fewer points of the frequency response are measured.46 Recent work has sought to address all three of these issues to dramatically improve the FD-DOS acquisition speed. The steady improvement of FD-DOS acquisition speed since the early 1990s is shown in Fig. 4.
Both time division multiplexing (TDM) and frequency division multiplexing (FDM) approaches have been used to reduce the switching time between modulation frequencies and wavelengths for FD-DOS, with systems now exceeding 100 Hz measurement rates.43,57,62,63 Recent advances in custom digital electronics for FD-DOS have dramatically increased acquisition speeds by allowing for precise control over both RF signal generation and data acquisition. Figure 5(a) shows an example of our group’s recent digital FD-DOS setup, which incorporates direct digital synthesis (DDS) for generating the RF signals that modulate a bank of laser diodes using FDM.47,58 A high-speed analog to digital converter (ADC) digitizes the measured optical signal after detection with an Avalanche Photodiode (APD). The system is currently capable of taking single modulation frequency measurements with kilohertz repetition rates (Fig. 4, data point labeled Torjesen et al.).
There have been several efforts to commercialize FD-DOS technology. One notable instrument is the OxiplexTS (ISS, Champaign, IL), which is capable of taking measurements at 50 Hz. This instrument has been used for numerous studies of the brain and for sports medicine applications.45,64,65 A potential limitation of the ISS system is that it only collects data at a single modulation frequency. While, theoretically, this is sufficient to separate absorption and scattering, the use of additional frequencies may allow for more robust methods of quantifying data quality by examining the residuals of the data fit to the theoretical model.46 Similarly, prototype optical mammography systems developed by Siemens and Carl Zeiss were designed to transilluminate a gently compressed breast using optical fibers that were scanned over the tissue surface. Both instruments used a single modulation frequency (110 MHz for the Siemens instrument and 70 MHz for the Zeiss instrument). Due to the mechanical scanning of the devices, measurement of the full breast took 2–3 min.61,66
The faster acquisition speeds enabled by advances in FD-DOS hardware have led to growing bottlenecks in data processing. To illustrate the scope of the problem, consider the following example: a typical FD-DOS measurement might consist of 50 separate modulation frequencies with 4096 data points collected per frequency, collected 100 times/s over two channels. This leads to 20 MB/s of data that must be acquired, processed, and visualized. Our group has sought to reduce this data burden by performing substantial preprocessing using onboard implementations of data reduction algorithms.47 An example of this is the implementation of the Goertzel algorithm in the FD-DOS firmware or software, which efficiently calculates the phase and amplitude of each measurement, substantially reducing the data burden on the host computer.67 Figure 5(b) outlines the data processing pipeline used with our system, with the Goertzel algorithm implemented on a System on a Chip (SoC) ARM processor.
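The Goertzel recurrence itself is compact enough to sketch directly; it evaluates a single DFT bin with one real multiply per sample, which is why it suits embedded preprocessing. The sampling rate, tone, and block size below are arbitrary illustrative choices.

```python
import cmath
import math

def goertzel(samples, k):
    """Complex DFT coefficient at bin k for a block of N real samples,
    computed with the Goertzel recurrence instead of a full FFT."""
    n = len(samples)
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        # Second-order real recurrence; one multiply per sample.
        s_prev2, s_prev = s_prev, x + coeff * s_prev - s_prev2
    # Combine the final two states into the complex bin value.
    return cmath.exp(1j * w) * s_prev - s_prev2

# A 1 kHz tone sampled at 16 kHz with a 30-degree phase offset: the
# amplitude and phase of the detected waveform fall out of one bin.
fs, f0, n = 16_000, 1_000, 256
phase = math.radians(30)
sig = [math.cos(2 * math.pi * f0 * t / fs + phase) for t in range(n)]
X = goertzel(sig, k=round(f0 * n / fs))   # bin 16 (coherent sampling)
amp = 2 * abs(X) / n
ph = math.degrees(cmath.phase(X))
print(round(amp, 3), round(ph, 1))  # 1.0 30.0
```

Only the amplitude and phase per modulation frequency need to leave the embedded processor, rather than the raw 4096-point blocks, which is the data reduction described above.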
The inverse problem (i.e., the mapping of amplitude and phase to absorption and scattering) represents an additional data processing bottleneck. Traditionally, the inverse model for FD-DOS is solved using an iterative minimization algorithm in which an initial guess of tissue μa and μs′ is made and then updated until the theoretical signal matches the measured data within a set threshold.44 Even on modern hardware, when fitting 36 modulation frequencies at six different wavelengths, this process can only be completed at around 8 Hz, limiting the ability to achieve real-time data visualization. Use of fewer modulation frequencies or wavelengths can increase the processing speed. To solve this problem, our group has trained a neural network to replace the iterative technique [Fig. 5(b)], resulting in a 3–5 order-of-magnitude increase in data processing speed and enabling video-rate display and feedback.47
High speed acquisition and real-time processing are two necessary components for real-time FD-DOS systems. However, in order to generate an FD-DOS image, a method of scanning the optical source and detector over the surface of tissue must be devised. Until recently, the status quo for FD-DOS imaging has been either complex DOT configurations50,60 or the manual movement of a handheld probe in a point-by-point manner over the tissue of interest.4,39,68 Manual scanning is a laborious process that, when used to monitor breast cancer, requires between 15 min and 1 h per patient. The advent of higher acquisition speeds has allowed for new FD-DOS imaging paradigms to be realized. For example, our group has recently demonstrated the use of an automated scan head to achieve depth resolved images of optical phantoms using digital FD-DOS. We showed that axial line measurements of optical properties could be determined by scanning the source and detector fibers in a hypotrochoidal pattern using an automated mechanical gear system. Manual movement of the scan head across a sample then resulted in B-mode, depth resolved images in a manner similar to ultrasound.69 A high speed FD-DOS scanning system such as this could operate at the point of care, potentially transforming FD-DOS from a research tool to an integral part of the clinical workflow, allowing clinicians to measure tissue metabolism via hemoglobin with the same convenience with which they now probe tissue structure using ultrasound.
In addition to improving comfort and convenience for patients, operators, and clinicians, high-speed FD-DOS may also assist in the exploration of new biological and clinical applications. For example, high-speed FD-DOS has been used to measure changes in scattering that occur in neuronal membranes after stimulation.57,62 Similarly, high acquisition speeds allow for monitoring of vascular dynamics on a heartbeat-to-heartbeat basis, which was demonstrated in 1999 for a single modulation frequency56 and more recently for 36 modulation frequencies, as shown in Fig. 5(c).58 These fast signals can be used to calculate pulsatile concentrations of oxy- and deoxy-hemoglobin, as well as arterial oxygen saturation.47 These metrics, when presented in real time, are similar to those of the most ubiquitous diffuse optical tool currently in use: the pulse oximeter. While pulse oximeters can only measure arterial saturation and heart rate, FD-DOS can be used to directly measure hemoglobin concentrations. This feature allows FD-DOS to continue to provide useful data in low-perfusion settings where pulse oximetry fails.
V. DIFFUSE OPTICAL TOPOGRAPHY AND TOMOGRAPHY
Substantial research has been devoted to the development of systems capable of generating 3D images of the brain, breast, and other tissues in order to improve the spatial localization of DOI contrast.70–72 While TD and FD acquisition methods can be utilized for DOT, CW is often employed instead because of the simplicity and speed afforded by CW measurements.73–75 CW-DOT involves the projection of a constant intensity light source into tissue, typically via an array of optical sources and/or optical fibers impinging on the tissue over a wide area, as shown in Fig. 6. The resulting remitted signal is then detected by an array of detectors and/or optical fibers. As pointed out in one recent review of DOT,76 optical topography through the use of the Modified Beer–Lambert Law (MBLL) is commonly performed with CW measurements and is often mistaken for DOT. Many commercially and clinically available NIRS instruments employ this form of data processing.77
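The differential MBLL computation behind such topographic maps can be sketched as follows. The extinction coefficients, pathlength, and differential pathlength factor (DPF) below are illustrative assumptions, not values from a specific instrument.

```python
import math

# Approximate base-10 extinction coefficients (cm^-1 / M); illustrative
# only -- use published tabulations for real analyses.
EPS = {750: {"HbO2": 518.0, "Hb": 1405.0},
       850: {"HbO2": 1058.0, "Hb": 691.0}}

def mbll_delta(I, I0, wavelengths, d, dpf):
    """Differential MBLL: convert intensity changes at two wavelengths
    into concentration changes (mol/L), assuming a fixed DPF and a
    source-detector distance d (cm). Scattering is assumed constant."""
    dod = [-math.log10(I[w] / I0[w]) for w in wavelengths]  # delta OD
    # Solve dOD(lambda) = sum_i eps_i(lambda) * dC_i * d * DPF.
    a = EPS[wavelengths[0]]["HbO2"] * d * dpf
    b = EPS[wavelengths[0]]["Hb"] * d * dpf
    c = EPS[wavelengths[1]]["HbO2"] * d * dpf
    e = EPS[wavelengths[1]]["Hb"] * d * dpf
    det = a * e - b * c
    return ((e * dod[0] - b * dod[1]) / det,
            (-c * dod[0] + a * dod[1]) / det)

# Synthetic example: intensity drops at both wavelengths after a
# hypothetical activation event (d = 3 cm, DPF = 6).
I0 = {750: 1.00, 850: 1.00}
I = {750: 0.985, 850: 0.970}
d_hbo2, d_hb = mbll_delta(I, I0, (750, 850), d=3.0, dpf=6.0)
print(d_hbo2 > 0)  # True: the larger 850 nm drop implies an HbO2 rise
```

Because only intensity ratios and a fixed DPF enter the calculation, this yields relative concentration changes, not the absolute values discussed for FD and TD methods.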
While feasibility of the use of NIRS to monitor brain oxygenation was first demonstrated in 1977,78 spatial mapping of functional brain activity in two dimensions was not established until the mid-1990s,79 which marks the first major breakthrough in topography with NIRS. A full review of the history of NIRS, fNIRS, and CW instrumentation is outside the scope of this paper. Those interested in this background are advised to consult the review by Ferrari and Quaresima.80 Here, we highlight recent advances in high speed NIRS topography.
In order to enable fast optical topography with high temporal resolution and real-time display, both fast data acquisition and fast data processing are necessary. Acquisition speed bottlenecks arise from the number of source–detector pairs and are limited by the speed at which unique source–detector pairs can be scanned. As presented in the review by Scholkmann,77 commercially available devices use time-multiplexing, frequency-multiplexing, or code-multiplexing to distinguish between measurements taken from unique source–detector pairs. Time-multiplexing refers to the mode of operation in which only one source–detector pair is active at a given time. With a relatively small number of sources and detectors (<10 sources, <20 detectors), time resolutions of 2–50 Hz have been achieved by a number of devices, such as the PortaLite (Artinis, Einsteinweg, The Netherlands) and fNIR1100 (fNIR, Potomac, MD, USA), using this method. However, in order to achieve time resolutions on the same order of magnitude for larger numbers of source–detector pairs (16–48 sources and 16–32 detectors), frequency-multiplexing has been implemented to allow multiple source–detector pairs to be active simultaneously. In this scheme, the sources for unique source–detector pairs are modulated at particular frequencies distinct from other source frequencies. Detection of multiple channels can occur in parallel by subsequently demodulating the signal at a given detector to reveal the unique signals from the various sources. Companies such as TechEn (Milford, MA, USA), NIRx (Berlin, Germany), and Rogue Research (Montreal, QC, Canada) have employed this strategy to achieve time resolutions of 6–10 Hz when using several hundred source–detector pairs and up to 160 Hz when using fewer pairs. Finally, code-multiplexing, which assigns a unique bit sequence to each source–detector pair, has been used, for example, with MRRA’s Genie (Euless, TX, USA) to achieve a 5 Hz time resolution with 16 sources and 32 detectors.
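The frequency-multiplexed demodulation step can be sketched with a software lock-in. The carrier frequencies, amplitudes, and block length below are hypothetical, chosen so that each carrier completes an integer number of periods in the analysis block.

```python
import math

fs, nsamp = 50_000, 5_000          # 50 kHz sampling, 0.1 s block
f1, f2 = 1_000, 1_300              # one carrier frequency per source (Hz)
a1, a2 = 0.8, 0.3                  # "tissue" attenuations to recover

# One detector sees the sum of both simultaneously active sources.
det = [a1 * math.cos(2 * math.pi * f1 * t / fs)
       + a2 * math.cos(2 * math.pi * f2 * t / fs) for t in range(nsamp)]

def demodulate(sig, f, fs):
    """Lock-in demodulation: correlate with quadrature references at f.
    Over an integer number of periods, the other carrier averages to
    exactly zero, so the channels separate cleanly."""
    n = len(sig)
    i = sum(x * math.cos(2 * math.pi * f * t / fs)
            for t, x in enumerate(sig))
    q = sum(x * math.sin(2 * math.pi * f * t / fs)
            for t, x in enumerate(sig))
    return 2.0 * math.hypot(i, q) / n

print(round(demodulate(det, f1, fs), 3),   # 0.8
      round(demodulate(det, f2, fs), 3))   # 0.3
```

The same principle, implemented in analog or digital hardware, is what lets hundreds of source–detector pairs run in parallel rather than being stepped through one at a time.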
While the differential MBLL is a simple and useful algorithm for generating 2D topographic reconstructions, performing 3D tomography is far more computationally expensive, introducing another major speed bottleneck. In tomography, the detected light field is typically fit to a volumetric model of photon propagation in which the absorption and scattering parameters of individual voxels are estimated. Because the number of voxels is typically much greater than the number of light sources and detectors, the inverse problem for DOT is both ill-posed and under-determined, making solutions non-unique.81 The iterative nature of this technique combined with the size of the matrices involved typically results in a single reconstruction taking tens of minutes to several hours, depending on the number of voxels.82
Tomographic reconstruction algorithms can be divided into two main categories: linear and non-linear approaches. For either approach, both the forward model and inverse model are necessary to extract changes in optical properties, with the forward model commonly employing the finite element method.83 Linearization techniques employ either the Born approximation84 or the Rytov approximation,85 both of which linearly relate perturbations in the detected signal to changes in optical properties through a Jacobian sensitivity matrix. The Born approximation estimates the fluence rate in a medium as the sum of an initial, unperturbed background fluence and a perturbation, whereas the Rytov approximation models the fluence rate as the product of that unperturbed fluence and an exponential term.86 By inverting the Jacobian, the inverse problem can be solved to reconstruct changes in optical properties at each voxel. These approximations, however, only hold for small perturbations to the turbid medium and in many cases provide largely qualitative reconstructions.87 Due to the ill-posed nature of the problem, some form of regularization is necessary to invert the Jacobian matrix. One of the most common regularization and inversion techniques currently in use is the Moore–Penrose generalized pseudo-inversion with Tikhonov regularization, which restricts the L2 norm of the optical properties.3,88 Being non-iterative, these linearization and regularization methods are inherently faster than the conventional nonlinear iterative methods. 
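A toy version of this regularized linear step, with a hypothetical 4-measurement by 3-voxel Jacobian, illustrates the Tikhonov-regularized pseudo-inverse (real problems involve tens of thousands of voxels and rely on sparse solvers).

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def tikhonov(J, y, lam):
    """x = (J^T J + lam*I)^(-1) J^T y: the regularized pseudo-inverse,
    where lam penalizes the L2 norm of the reconstructed perturbation."""
    Jt = transpose(J)
    A = matmul(Jt, J)
    for i in range(len(A)):
        A[i][i] += lam
    rhs = [sum(Jt[i][k] * y[k] for k in range(len(y)))
           for i in range(len(A))]
    return solve(A, rhs)

# Hypothetical 4-measurement x 3-voxel sensitivity (Jacobian) matrix.
J = [[0.90, 0.30, 0.10],
     [0.40, 0.80, 0.20],
     [0.10, 0.50, 0.70],
     [0.05, 0.20, 0.90]]
x_true = [0.0, 0.02, 0.0]   # absorption perturbation in voxel 2 only
y = [sum(Jr[j] * x_true[j] for j in range(3)) for Jr in J]
print([round(v, 4) for v in tikhonov(J, y, lam=1e-6)])  # ~[0, 0.02, 0]
```

In practice the Jacobian has far more columns (voxels) than rows (measurements), so the regularization term is what makes the inversion well-posed at all, and its cost scales with the matrix size as discussed below.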
However, generation and inversion of the Jacobian, especially for large numbers of voxels and source–detector pairs, can still be time-consuming, and sparsity regularization methods have been developed to decrease the computational burden and drastically increase reconstruction speed.82 The same study showed that parallelization across multiple central processing units (CPUs) can be used to simultaneously compute sensitivity values between various voxel and source–detector pairs, allowing for additional increases in speed.42 For sensitivity matrices ranging in size from 50 000 to 320 000 nodes (which produce sensitivity errors of ∼10% to ∼4%), processing time decreased by a factor of roughly 2.3 on average when using sparsity regularization, with an additional factor of roughly 2 from parallelization, equating to a reduction in processing times for this range of matrix sizes from 30–470 min to 7–65 min. Relatedly, compressive sensing techniques employing L1 minimization on the sensitivity matrix have been developed, which allow for the construction of sparser matrices, further increasing the reconstruction speed.89 While this study did not report exact reconstruction times, it demonstrated that for an array of 625 source–detector pairs, up to 40% of the source–detector pairs could be eliminated from the sensitivity matrix through compressive sensing while still achieving nearly the same contrast-to-noise ratio in reconstructed images. These techniques exploit the fact that many of the entries in the sensitivity matrix are nearly zero, especially under the Born and Rytov assumptions, since only small perturbations can be analyzed. This feature allows the sensitivity matrix to be made sparser, which is computationally more efficient. These reductions in processing time can have significant implications for patient care.
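The gain from sparsifying a sensitivity matrix can be seen in a few lines. The toy sketch below is not the cited method: it builds a dense matrix whose entries decay rapidly away from each source–detector pair's sensitive region, drops near-zero entries, and applies the matrix using only the stored nonzeros.

```python
import numpy as np

# Toy sensitivity matrix: each source-detector pair is sensitive mainly to
# voxels near its own location, so most entries are vanishingly small.
n_pairs, n_vox = 200, 5000
pair_pos = np.linspace(0.0, 1.0, n_pairs)[:, None]
vox_pos = np.linspace(0.0, 1.0, n_vox)[None, :]
J = np.exp(-((pair_pos - vox_pos) / 0.02) ** 2)

# Sparsify: keep only entries above a small threshold.
rows, cols = np.nonzero(np.abs(J) > 1e-3)
vals = J[rows, cols]
fill = vals.size / J.size                 # fraction of entries retained

# Sparse forward projection touches only the stored nonzeros, yet agrees
# with the dense product to within the truncation error.
x = np.random.default_rng(0).random(n_vox)
y_dense = J @ x
y_sparse = np.bincount(rows, weights=vals * x[cols], minlength=n_pairs)
```

Here roughly 90% of the matrix is discarded with a relative forward-projection error well below 0.1%; production codes would use a dedicated sparse-matrix library rather than this index-array sketch.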
A reduction from 470 min to only 1 h for reconstruction could make analysis and subsequent clinical action feasible within a single, more comfortable patient visit, as opposed to requiring multiple visits or full-day waiting periods in the clinic.
Non-linear algorithms are better suited for quantitative imaging than linearization techniques and generally follow an iterative workflow. Two general approaches to this iterative method are Newton-like approaches90 and gradient-based approaches.91 While offering more rapid convergence, Newton-like approaches require the repeated construction and inversion of a Jacobian matrix, increasing the necessary computational power. Gradient-based approaches converge more slowly but do not require the repeated inversion of a Jacobian matrix. Because the underlying problem remains ill-posed, these iterative methods must often incorporate the same regularization techniques described previously. The speed of these non-linear iterative approaches has recently been increased through compressive sensing (CS),92 with one CS-based approach reconstructing anomalies in one case as fast as 46 s, compared to 116 s with a conjugate-gradient method (2.5× speed increase) and 27 h with a standard genetic algorithm (2100× speed increase). Noniterative compressive sensing algorithms, such as the MUSIC (MUltiple SIgnal Classification) algorithm, have also been developed.93 One bottleneck for the first generation of noniterative compressive sensing techniques in DOT occurred when either the number of sources or the number of detectors was small, causing the maximum achievable sparsity to be limited by the scarcer element. This paradigm did not allow asymmetrical systems, such as those with few sources but many detectors, to achieve maximum sparsity and computational speed.
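The gradient-based branch of this workflow can be sketched on a toy nonlinear forward model. The Beer–Lambert-style exponential below is an illustrative stand-in for a real diffusion forward solver; the point is only that each iteration forms the model Jacobian to take a gradient step but never inverts it, unlike a Newton-type scheme.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy nonlinear forward model: detected intensity decays exponentially with
# the absorption accumulated along each source-detector path (hypothetical).
n_meas, n_vox = 40, 10
L = rng.random((n_meas, n_vox))            # path lengths through each voxel
mu_true = 0.1 + 0.05 * rng.random(n_vox)   # "true" absorption coefficients
y = np.exp(-L @ mu_true)                   # noise-free measured intensities

mu = np.full(n_vox, 0.12)                  # initial guess
step = 0.01
for _ in range(10_000):
    r = np.exp(-L @ mu) - y                # residual of 0.5*||r||^2
    Jf = -np.exp(-L @ mu)[:, None] * L     # forward-model Jacobian (never inverted)
    mu -= step * Jf.T @ r                  # plain gradient step
```

With noise-free data and a well-posed toy problem the iterates converge to `mu_true`; real DOT problems are ill-posed and would add a regularization term to the objective.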
However, more recently in the field of compressive sensing, joint sparse recovery has allowed for enhanced sparsity when the number of sources or detectors is small.94 More recently still, speeds have been increased through the use of neural networks, which in one study reduced reconstruction times from 1–2 min to just a few seconds.95 With respect to advances in hardware, the incorporation of graphics processing units (GPUs) has allowed for up to a 300× increase in reconstruction speed compared to CPUs for iterative methods, resulting in a reduction of reconstruction time from ∼38 min to 7 s for a 5000 voxel volume and to <1 s for smaller volumes.96 The same study demonstrated the ability to monitor mouse brains in real time with DOT during a pre-seizure state, a significant step forward in the clinical utility of DOT. Of course, for human studies, larger sensitivity matrices would generally be required to model the larger anatomy, meaning that further speed increases will be necessary for real-time DOT in humans. For more information on data acquisition, data processing, and clinical applications of DOT, see Ref. 76.
VI. SPATIAL FREQUENCY DOMAIN IMAGING
Unlike the other DOI methods described above, Spatial Frequency Domain Imaging (SFDI) is a non-contact wide-field DOI technique that provides absorption and scattering properties on a pixel-by-pixel basis, typically at relatively shallow tissue depths (i.e., 2–5 mm).2,19 SFDI typically uses sinusoidal spatial patterns of light projected onto a sample and a 2D imaging sensor to detect diffusely scattered light over a wide area.2,97 Tissue must be measured using at least two spatial frequencies to accurately estimate both μa and μs′. A commonly used SFDI method collects images at three offset phases for each spatial frequency, which are then computationally demodulated to isolate the tissue response at each spatial frequency.2 The necessity of collecting multiple exposures for each spatial frequency and wavelength has historically limited the speed of SFDI to below 1 Hz. Once the diffuse reflectance images from two or more spatial frequencies have been collected, μa and μs′ at each pixel can be estimated by an analytical model or through interpolation of a pre-computed look-up table (LUT) generated from either analytical methods or Monte Carlo simulations.98,99 Iterative inverse methods for SFDI are slow because the inverse problem must be solved for each pixel within an image. LUT interpolation is relatively fast (∼5 ms to 10 ms), but since there are often ∼10⁶ look-up operations required for a single measurement, substantial data processing bottlenecks still limit real-time processing and data visualization. Several recent advances have greatly accelerated both SFDI acquisition rates and processing times.
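The three-phase demodulation step has a simple closed form: with frames I1, I2, I3 acquired at phase offsets of 0°, 120°, and 240°, the AC amplitude at each pixel is M_AC = (√2/3)·√[(I1−I2)² + (I2−I3)² + (I3−I1)²], and the DC amplitude is the frame average. The sketch below applies this to a synthetic camera line; the reflectance profiles and spatial frequency are invented for illustration.

```python
import numpy as np

# Synthetic 1D "tissue" line: the reflectance amplitudes we want to recover.
x = np.linspace(0.0, 50.0, 512)             # spatial position, mm (hypothetical)
r_dc = 0.6 + 0.1 * np.sin(0.1 * x)          # DC diffuse reflectance
r_ac = 0.3 + 0.05 * np.cos(0.07 * x)        # AC diffuse reflectance at f_x

# Three frames of the projected sinusoid at 0, 120, and 240 degrees.
f_x = 0.2                                   # spatial frequency, mm^-1
phases = (0.0, 2 * np.pi / 3, 4 * np.pi / 3)
I1, I2, I3 = (r_dc + r_ac * np.cos(2 * np.pi * f_x * x + p) for p in phases)

# Pixel-by-pixel demodulation recovers both amplitudes exactly.
m_ac = (np.sqrt(2) / 3) * np.sqrt((I1 - I2) ** 2 + (I2 - I3) ** 2 + (I3 - I1) ** 2)
m_dc = (I1 + I2 + I3) / 3
```

On real camera data the same two lines run per pixel and per wavelength, which is why the exposure count, not the arithmetic, sets the acquisition rate.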
One method recently implemented to reduce the number of exposures needed to complete an SFDI measurement utilized an imaging spectrometer.100 This technique enabled 36 wavelengths to be collected in 3–6 s. However, reconstruction of the imaging spectrometer snapshot images was a computationally intensive process, which limited the rate at which data could be displayed. Other recent work has improved SFDI acquisition speed by making use of the fact that a sinusoidal image contains two principal frequency components: a DC background and the AC carrier. Making use of both of these components reduced the number of images required to estimate optical properties at a single wavelength from six to two101 or even one102 (Fig. 7). To do this, each line of the image is first filtered into DC and AC components. The AC component is then Hilbert transformed to obtain the AC envelope, which is used to calculate the diffuse reflectance at the illumination frequency. Diffuse reflectance images from each of these components are then used to estimate μa and μs′. This general procedure has since been improved and expanded to include profilometry information, which allows for 3D video-rate SFDI imaging.103
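The filter-then-Hilbert procedure can be sketched with SciPy. The frame below is synthetic and exactly periodic on the sampling grid, which keeps the FFT-based filtering artifact-free; real camera lines would need windowing and care at image edges.

```python
import numpy as np
from scipy.signal import hilbert

# Single synthetic SFDI camera line: DC background plus an AC carrier whose
# envelope (the quantity of interest) varies slowly across the field.
n, dx = 1000, 0.1                               # samples and pixel pitch, mm
x = np.arange(n) * dx
f_x = 0.2                                       # illumination frequency, mm^-1
r_dc = np.full(n, 0.6)
r_ac = 0.3 + 0.05 * np.cos(2 * np.pi * 0.01 * x)
frame = r_dc + r_ac * np.cos(2 * np.pi * f_x * x)

# Separate the DC and AC components in the spatial-frequency domain.
F = np.fft.rfft(frame)
freqs = np.fft.rfftfreq(n, d=dx)
lp = freqs < f_x / 2                            # low-pass band keeps the DC term
dc_est = np.fft.irfft(F * lp, n)
ac_part = np.fft.irfft(F * ~lp, n)

# The magnitude of the analytic signal is the AC envelope, i.e., the diffuse
# reflectance at the illumination frequency, from a single exposure.
ac_env = np.abs(hilbert(ac_part))
```

Note that both amplitudes are recovered from one frame; the three-phase scheme needs three.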
An alternative solution uses different projection patterns to improve acquisition rates. Pattern projection in SFDI is typically done with a digital micromirror device, which displays gray values using pulse width modulation (PWM). For a sinusoidal image, gray values are needed to prevent unwanted harmonics from contaminating the AC image, but PWM limits the rate at which patterns can be displayed. Binary patterns, in contrast, can be generated at a much higher rate. Nadeau et al. used square-wave projections that were filtered to remove harmonics, increasing the pattern projection rate by 1–2 orders of magnitude.104 This method has been used to investigate blood flow dynamics in the mouse brain.105 Finally, wavelength multiplexing has also been used to improve the image acquisition rate.106,107 By modulating the source light in time as well as space, the resulting signal from each wavelength can be separated in the frequency domain. This procedure enables an arbitrary number of wavelengths to be imaged simultaneously, which can improve acquisition speeds.
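Frequency-encoded wavelength multiplexing can be illustrated at a single pixel: each wavelength's source is modulated at its own temporal frequency, the pixel records the sum, and an FFT of the time series un-mixes the contributions. The sampling rate, modulation frequencies, and reflectance values below are arbitrary illustrative choices.

```python
import numpy as np

fs = 1000.0                       # detector sampling rate, Hz (hypothetical)
t = np.arange(1000) / fs          # 1 s of data gives 1 Hz frequency bins
f1, f2 = 80.0, 130.0              # modulation frequencies of two wavelengths
r1, r2 = 0.42, 0.27               # per-wavelength reflectance at this pixel

# The pixel sees both modulated sources simultaneously.
signal = r1 * np.cos(2 * np.pi * f1 * t) + r2 * np.cos(2 * np.pi * f2 * t)

# An amplitude spectrum separates the wavelengths in the frequency domain.
spec = np.abs(np.fft.rfft(signal)) / (len(t) / 2)
freqs = np.fft.rfftfreq(len(t), d=1 / fs)
r1_est = spec[np.argmin(np.abs(freqs - f1))]
r2_est = spec[np.argmin(np.abs(freqs - f2))]
```

Adding a wavelength costs only another modulation frequency, not another exposure, which is why the approach scales to an arbitrary number of simultaneous wavelengths.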
Improvements in data processing speed are also essential for enabling real-time display of SFDI data. Various strategies have been employed to reduce the processing bottleneck. For example, Angelo et al. identified that the major time bottleneck of the LUT approach was the interpolation step.108 By precomputing a dense LUT and then selecting the nearest value matching the measured data, interpolation could be avoided and the rate of data processing could be substantially increased. The authors also examined the use of a parameterized polynomial fit of the LUT, which when used as an inverse algorithm yielded extremely fast calculations, although at the cost of decreased accuracy. Recently, neural networks have been used to solve the inverse problem in SFDI109 and in a closely related technique, spatially resolved reflectance.110 For example, Zhao et al. recently demonstrated that a deep neural network could be used to solve the inverse problem associated with SFDI.111 While the speed improvement when two spatial frequencies were used was marginal, there was a significant advantage over the LUT approach when additional spatial frequencies were utilized. This is important as the optical property extraction accuracy can be improved for some tissues when more than two spatial frequencies are used. Finally, parallel computing approaches using GPUs have been employed to decrease the time needed to calculate μa and μs′, recently achieving single-frame processing times of less than 2 ms.107
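The dense-LUT, nearest-entry strategy can be sketched as follows. The forward model here is a made-up smooth mapping from (μa, μs′) to diffuse reflectance at two spatial frequencies, standing in for diffusion theory or Monte Carlo; only the lookup logic, which replaces per-pixel interpolation or iteration, is the point.

```python
import numpy as np

def forward(mua, musp):
    # Hypothetical stand-in for Rd(f=0) and Rd(f>0); a real LUT would be
    # generated from diffusion theory or Monte Carlo simulations.
    rd_dc = musp / (musp + 25.0 * mua)
    rd_ac = musp / (musp + 80.0 * mua + 1.0)
    return rd_dc, rd_ac

# Precompute a dense LUT over the expected optical-property range (mm^-1).
mua_axis = np.linspace(0.001, 0.05, 400)
musp_axis = np.linspace(0.5, 3.0, 400)
MUA, MUSP = np.meshgrid(mua_axis, musp_axis, indexing="ij")
LUT_DC, LUT_AC = forward(MUA, MUSP)

def invert_nearest(rd_dc, rd_ac):
    # Nearest-entry search: no interpolation, just an argmin over the table.
    d2 = (LUT_DC - rd_dc) ** 2 + (LUT_AC - rd_ac) ** 2
    i, j = np.unravel_index(np.argmin(d2), d2.shape)
    return mua_axis[i], musp_axis[j]

# "Measured" reflectance for known properties, then recovered by lookup.
mua_est, musp_est = invert_nearest(*forward(0.02, 1.5))
```

The accuracy of the nearest-entry answer is set by the LUT grid spacing, which is the trade Angelo et al. exploited: a denser precomputed table buys speed at the cost of memory rather than accuracy.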
DOI methodologies are potentially powerful clinical tools as they can noninvasively investigate metabolic and molecular changes, often deep below the tissue surface, and are safe for all patients, including high-risk groups such as neonates.112,113 TD, FD, and CW techniques each have pros and cons that must be carefully weighed before deployment in any clinical application. We believe that acquisition, processing, and data visualization bottlenecks are simultaneously working to prevent current DOI methods from having a more substantial clinical impact. In contrast to DOI, many standard-of-care imaging methodologies provide real-time or near real-time feedback (e.g., ultrasound and mammography). This feedback allows for rapid diagnosis by physicians and allows operators to ensure that the data are of sufficient quality during imaging, reducing re-imaging rates. As most DOI methodologies require minutes to hours to collect and process images, they are not competitive with current methods in this respect. Once speed improvements are reliably implemented, DOI modalities have the potential to substantially contribute to the era of personalized and precision medicine. Since measurements can be taken frequently during treatments due to the favorable safety profile and low resource utilization of DOI, treatment regimens could potentially be modified in real time based on DOI feedback, providing individualized therapeutic strategies that may help overcome the highly variable response profiles of most interventions.
Pulse oximeters, which are ubiquitous in clinical practice, provide a powerful demonstration that the information provided by diffuse optical techniques is of high clinical value. In order for other DOI methods to have a similar clinical impact, we believe that they must provide similarly rapid measurements and ease of interpretation. As demonstrated in this review, there are many exciting advancements under development that directly target measurement, processing, and display speeds. In TD systems, the biggest advances come from new light sources and detectors. In FD systems, advances in digital electronics and rapid inverse models drive speed increases. For DOT and SFDI, new processing paradigms and the use of parallel processors and GPUs increase the speed of processing and display. Together, these advances make DOI more clinically viable and will hopefully enable advanced DOI instruments to become commonplace in a wide range of health care settings.
We gratefully acknowledge funding from the U.S. Department of Defense (DoD) (Award No. W81XWH-15-1-0070) and the American Cancer Society (Award No. RSG14-014-01-CCE).