Artificial Intelligence (AI) is revolutionizing many aspects of our society, enabling a wide variety of real-life applications, from decision-making tasks, such as image classification and autonomous vehicle control, to engineering design, analysis, and manufacturing, such as inverse design. Using deep learning algorithms that identify recurring patterns in previously collected data, an AI system can predict future events and make decisions.
The huge success of AI stems largely from rapid advances in deep neural networks, whose computational complexity requires dedicated hardware accelerators. Matrix multiplication is an essential but computationally intensive step. To increase the performance of this step compared to Central Processing Units (CPUs), Graphics Processing Units (GPUs), Field Programmable Gate Arrays (FPGAs), and Application-Specific Integrated Circuits (ASICs) have attracted extensive interest as AI accelerators. However, performance increases of these digital electronic solutions will eventually face limitations due to the end of Moore’s law and Dennard scaling. Here, photonics has long been recognized as a promising alternative to address the fan-in and fan-out problems for linear algebra processors.1,2 Indeed, while studies reveal that the key performance limitation of electronic processors is power consumption, where data movement (rather than processing) dominates due to RC limitations, photonics can move data relatively freely and can take advantage of bandwidths that match the photodetection rate.
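To make the point above concrete, the following minimal sketch (not taken from any of the papers in this issue) shows a single dense neural-network layer in NumPy, where the matrix multiplication `x @ W` is the multiply-accumulate-dominated step that photonic and other hardware accelerators aim to speed up:

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_layer(x, W, b):
    # The matrix multiplication x @ W dominates the cost: for a batch of
    # n inputs of dimension d_in mapped to d_out outputs it requires
    # O(n * d_in * d_out) multiply-adds. The nonlinearity (here, ReLU)
    # is comparatively cheap, which is why accelerator research focuses
    # on the linear-algebra step.
    return np.maximum(x @ W + b, 0.0)

x = rng.standard_normal((32, 784))   # batch of 32 flattened 28x28 images
W = rng.standard_normal((784, 128))  # layer weights
b = np.zeros(128)                    # layer biases
y = dense_layer(x, W, b)
print(y.shape)  # (32, 128)
```

The layer sizes here are illustrative placeholders; the same cost structure holds for any dense or convolutional layer, which is why matrix multiplication is the natural target for photonic acceleration.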
Figure 1 illustrates how the optical matrix multiplier has evolved since the 1970s with free-space optics,3 to the 1980s with fiber optics,4 and eventually to current solutions with integrated photonics.5 The growing capability of photonic integrated circuit (PIC) manufacturing at unprecedented levels of integration and complexity has ignited the field of photonic accelerators for AI. The current success of photonic integration stems from modern information technologies; however, continuing innovations are stretching the limits of the established platforms, such as the crucial need for photonic nonlinear activation components that complete the full processing loop of deep neural networks in the optical domain. Undoubtedly, AI can, in turn, facilitate the design of complex photonic components and systems.
As a follow-up to a previous APL Photonics special issue on photonics and AI6 in 2020, the current 2022 APL Photonics special issue discusses the status and future perspectives on photonics and AI in IT, among other topics, highlighting both the role of photonics for AI and the role of AI for photonics.
As an example of papers highlighting how upcoming AI techniques can influence the field of photonics, Chen and Dal Negro7 propose in an invited paper a deep learning approach based on physics-informed neural networks that can be utilized for retrieval of the optical parameters of nanophotonic structures from non-invasive near-field imaging. Similarly, in a contributed paper, Rendón-Barraza et al.8 use deep-learning-enabled analysis of diffraction patterns as a critical part of a deeply sub-wavelength, non-contact optical metrology of sub-wavelength objects. In another contributed paper, Zhang et al.9 show how deep learning can be utilized in the design of random metasurfaces, while highlighting the pitfalls, as they notice that no single universal deep convolutional neural network model works well for all the metasurface classes studied in their paper.
In addition, several papers in this issue showcase how photonic hardware can enable accelerators or system architectures that speed up or improve the energy efficiency of core routines in AI. El Srouji et al.10 have written a tutorial covering architectures, technologies, learning algorithms, and benchmarking for photonic and optoelectronic neuromorphic computing engines. As an example of a neuromorphic photonic system, in an invited paper, Hejda et al.11 use off-the-shelf fiber-optic components operating at telecom wavelengths to experimentally demonstrate how a vertical-cavity surface-emitting laser (VCSEL)-based photonic spiking neuron can encode a digital image into continuous, rate-coded (at GHz speeds) spike trains. Lamon et al.12 share their perspective on the critical problem of optical data storage, which is of utmost importance for many machine learning applications and hence also for the underlying hardware. Zhu et al.13 share their perspective on the distribution of deep learning training workloads by proposing a system architecture leveraging silicon photonics to accelerate deep learning training. Nevin et al.14 have written a tutorial describing how machine learning techniques can be utilized in optical fiber communication systems, not only providing a literature survey but also highlighting some promising avenues for upcoming techniques, such as explainable machine learning, digital twins and physics-informed machine learning for the physical layer, and graph-based machine learning for the networking layer.
Furthermore, the special issue contains a selection of papers that specifically investigate the prospects of integrated photonic hardware for AI accelerators. Singh et al.15 address in a contributed paper the need to co-simulate both the optical and electronic components of large-scale neuromorphic photonic integrated circuits on a single platform by proposing a Verilog-A based approach and illustrating it for a single photonic neuron circuit. In an invited article, Yi et al.16 take inspiration from the reconfigurability of field-programmable gate arrays in electronics and propose an integrated coherent network of micro-ring resonators that can emulate optical filters, optical delay lines, optical space switching fabrics, high extinction ratio Mach-Zehnder interferometers, and photonic differentiation by controlling the phases in the arms of an interferometric mesh. Amin et al.17 wrote an APL Photonics Editor’s Pick, demonstrating a novel electro-optic device that can be used to tune the shape of the neuron nonlinearity, based on an ITO-graphene heterojunction integrated absorption modulator in a Si-photonics platform. In a contributed paper, Xiao et al.18 use a hybrid III-V-on-silicon MOSCAP platform to illustrate how tensorized neural networks can be emulated; these require far fewer optical devices than other photonic neural architectures, leading to increased efficiencies in footprint and energy consumption. Shi et al.19 explore in an invited paper the usage of semiconductor optical amplifiers in an InP integrated photonics platform to emulate multi-layer neural networks, achieving a doubling in computation speed and a 12× improvement in energy efficiency compared to GPUs. Finally, Al-Qadasi et al.20 determine the bounds on energy efficiency and scaling limits for today’s silicon photonics technology and share their perspective on future research directions that would allow us to overcome these current limitations.
In conclusion, there is a fruitful cross-fertilization between the photonics and AI research communities, providing solutions to some aggressive performance targets and design challenges in currently developed IT systems. We are hopeful that the selection of articles included in this APL Photonics special issue will both inspire the research communities as to which bottlenecks to address next and provide guidance on which IT applications can benefit from this cross-fertilization.
We are grateful for the support of APL Photonics Editor Yikai Su, Editor-in-Chief Benjamin Eggleton, and editorial managers Jessica Trudeau and Jenny Stein. Furthermore, we sincerely thank all the authors who shared their valuable insights in this special issue.
Conflict of Interest
The authors have no conflicts to disclose.
Author Contributions
Qixiang Cheng: Writing – original draft (equal). Madeleine Glick: Writing – original draft (equal). Thomas Van Vaerenbergh: Writing – original draft (equal).
Data Availability
Data sharing is not applicable to this article as no new data were created or analyzed in this study.