Photonic accelerators for Artificial Intelligence (AI) are advancing rapidly, promising revolutionary computational speed for modern AI architectures. By leveraging photons with bandwidths exceeding 100 THz, photonic accelerators tackle the computational demands of AI tasks that GHz electronics alone cannot meet. Photonic accelerators integrate circuitry for matrix–vector operations and ultra-fast feature extraction, enabling the energy-efficient, parallel computations crucial for the training and inference of AI models in applications such as classification, segmentation, and feature extraction. This Perspective discusses the challenges and opportunities that optical computation opens for AI research and industry.

Modern Artificial Intelligence (AI) architectures are confronting a computational power challenge that underpins present and future applications of this technology, especially the recently developed foundation models.1–4 The latest report from OpenAI5 divides the development of AI into two distinct phases according to the training computational power required. Before 2012, AI computing power doubled approximately every two years, following the empirical trend of Moore’s law,6 which satisfied the required demand (Fig. 1, white area). The subsequent AI stage witnessed a sharp acceleration, with computing power requirements doubling approximately every 3.4 months (Fig. 1, darker area).

FIG. 1. The total amount of compute, in petaflop/s-days. Reproduced with permission from D. Amodei and D. Hernandez, https://openai.com/research/ai-and-compute. Copyright 2018 OpenAI.

The escalating demand for computational resources has profound implications for the training of foundation models, which are rapidly becoming the new cornerstone of AI capabilities.5,7,8 Foundation models are pre-trained neural network architectures designed to capture generic, broad patterns in data.9,10 These architectures are now fundamental building blocks for a variety of tasks and applications, providing the starting point for further specialization. As the size of foundation models grows to accommodate the complexity of modern tasks in speech recognition, image classification, and natural language processing, the training process becomes increasingly resource-intensive.5,11,12 Each training run requires adjusting hundreds of millions of parameters to minimize errors, demanding substantial computational power in the thousands of petaflop/s-days5,11 (Fig. 1).

Traditional electronic central processing units (CPUs) struggle to accommodate such rapid growth in computational demand, creating a bottleneck for future AI expansion.13–15 The first issue is that traditional CPUs follow Moore’s law, which lags behind the second phase of modern AI.3,5,16 At the same time, the semiconductor industry is approaching physical limits as it transitions between nanometer nodes, struggling to maintain even the baseline scaling of Moore’s law and approaching a plateau.17 In this context, the training of foundation models faces the challenge of accommodating increasingly sophisticated architectures while finding the resources to meet the required computational demand. Addressing this problem is paramount to avoiding a second winter in the AI industry.5

Optical accelerators, such as analog optical processors, are emerging as promising solutions to this problem.13,15,16,18–23 Photonic accelerators act as a front end for digital systems, providing an alternative to traditional digital processing units.22,24,25 Optical computation processes information carried by light without converting it into slower electronic signals.26 Photonic processing offers numerous advantages: speed, with rates of hundreds of terabits per second (Tb/s);13 enhanced power efficiency, owing to the minimal heat generated by optical signals; and long-distance transmission with minimal energy loss.15,18 The inherent parallelism of light, which can concurrently traverse multiple pathways, enables optical processors to perform many operations simultaneously at low power, on the order of a few watts per teraflop.13,27

The complex amplitude of optical waves, which carries both magnitude and phase, enables complex-valued optical neural networks (CV-ONNs), another notable development in photonic computing.28 These networks extend the capabilities of conventional neural networks by incorporating complex numbers, with both real and imaginary components, into their computations. This enables CV-ONNs to model and process a broader range of data patterns, including phase information and interference effects, which are often crucial in scientific and engineering applications.
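To make this distinction concrete, the following minimal numpy sketch (illustrative only, not drawn from any of the cited implementations) shows a complex-valued layer in which fields carry phase and can interfere before a detector records their intensity:

```python
import numpy as np

# Minimal sketch of a complex-valued "layer" acting on optical field amplitudes.
# Magnitude encodes amplitude, angle encodes phase; a photodetector measures
# intensity, i.e., the squared magnitude of the resulting field.
rng = np.random.default_rng(0)

x = rng.normal(size=4) * np.exp(1j * rng.uniform(0, 2 * np.pi, size=4))  # input field
W = rng.normal(size=(3, 4)) * np.exp(1j * rng.uniform(0, 2 * np.pi, size=(3, 4)))

y = W @ x                   # coherent superposition: components can interfere
intensity = np.abs(y) ** 2  # what a photodetector would record
print(intensity)
```

Two contributions of equal magnitude but opposite phase cancel in y (destructive interference), an effect a purely real-valued network cannot represent directly.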

Within the domain of deep neural networks (DNNs), optical accelerators exhibit high performance in executing fundamental tasks such as matrix multiplications; spatial 2D convolutions, including first- and second-order derivatives for edge detection; and element-wise nonlinear activation functions.14,15,19,24,25,27,29,30 Photonic acceleration enhances computing speed by three to six orders of magnitude over electronic devices.14 This speed enhancement allows the design of high-performance linear vector–vector (V–V) and matrix–vector (M–V) multiplications, the backbone of convolution operations in DNNs. These operations tend to be the most computationally intensive components of common DNN architectures, accounting for over 90% of their floating-point operations (FLOPs).13,14,19
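To illustrate why convolutions reduce to the M–V products that photonic hardware accelerates, the sketch below (a standard im2col construction, with illustrative names and sizes) casts a 2D convolution as a single vector–matrix product:

```python
import numpy as np

def im2col(image, k):
    """Unfold all k x k patches of a 2D image into the columns of a matrix."""
    H, W = image.shape
    cols = [image[i:i + k, j:j + k].ravel()
            for i in range(H - k + 1) for j in range(W - k + 1)]
    return np.stack(cols, axis=1)  # shape (k*k, n_patches)

image = np.arange(25, dtype=float).reshape(5, 5)
# Laplacian kernel: a second-order derivative filter used for edge detection.
kernel = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)

# The 2D convolution collapses into one vector-matrix product:
cols = im2col(image, 3)
conv = kernel.ravel() @ cols
print(conv.reshape(3, 3))
```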

Optical processor configurations exist both in free space15,16,19,20,23 and integrated on-chip,21,22 with several implementations already mature.13,18,26 Enhanced processing capabilities enable accelerated training and inference for neural networks, empowering researchers to train larger AI models faster while tackling problems previously beyond the reach of conventional electronics, such as hyperspectral video understanding.31,32

At present, free-space optical architectures are among the best-performing approaches for optical M–V multiplication.14,19,23 These systems exploit dynamic spatial multiplexing, which integrates easily with photodetector systems.15,16,19,20,23 The main computational framework for V–V and M–V products in free space unfolds in two fundamental steps. The first step is element-wise multiplication [Fig. 2(a)]: each input pixel representing xj aligns spatially and projects pixel-by-pixel onto a transmissive element with dynamically controlled optical characteristics, usually a Spatial Light Modulator (SLM). The SLM sets its transmissivity proportional to wij, yielding the scalar multiplication wijxj. The second step is optical fan-in [Fig. 2(b)]: the modulated pixels are combined physically by directing the transmitted light onto a single detector, producing a photon count directly proportional to the resulting dot product yi.
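A minimal numerical sketch of these two steps (illustrative names and sizes; calibration details of real systems are omitted) confirms that SLM modulation followed by fan-in reproduces an M–V product:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8
x = rng.uniform(0, 1, size=N)       # input pixel intensities (e.g., a display)
W = rng.uniform(0, 1, size=(3, N))  # SLM transmissivities, one row per output neuron

y = np.empty(3)
for i in range(3):
    modulated = W[i] * x            # step 1: pixel-wise attenuation on the SLM
    y[i] = modulated.sum()          # step 2: optical fan-in onto a single detector

assert np.allclose(y, W @ x)        # equivalent to a matrix-vector product
```

Note that intensities and transmissivities are non-negative; practical systems encode signed weights through offsets or differential detection.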

FIG. 2. Examples of recently proposed hardware optical accelerator architectures for matrix multiplication [(a)–(e)] and feature extraction (f). Reproduced with permission from Wang et al., Nat. Commun. 13, 123 (2022). Copyright 2022 Author(s), licensed under a Creative Commons Attribution (CC BY) license; reproduced with permission from Wang et al., Nat. Photonics 17, 408 (2023). Copyright 2023 Springer Nature; and reproduced with permission from Makarenko et al., in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, 2022, pp. 12692–12702. Copyright 2022 IEEE.

In a recent study,14 the authors propose a linear coherent matrix-multiplication framework based on spatial multiplexing via an SLM. The computation entails the evaluation of V–V dot products, denoted as yi = x · wi = ∑jxjwij, where x represents the input vector of neural activations from the preceding layer and wi represents the weight vector connecting the neurons in x to the ith neuron of the subsequent layer. Each component xj of x encodes the intensity of an individual spatial mode illuminated by an organic light-emitting diode (OLED) pixel, while each weight wij sets the transmissivity of a modulator pixel. This architectural design permits the parallel computation of up to 711 × 711 = 505 521 scalar multiplications and additions. The optical strategy minimizes energy consumption by aggregating dot products, summing the spatial modes on a single detector. For a substantial vector size N, the signal-to-noise ratio (SNR) scales in proportion to √N at the shot-noise limit, enabling accurate determination of the dot-product output even when individual modes carry relatively few photons on average.14 This capacity to perform dot products on expansive vectors with minimal energy constitutes a distinctive advantage of optical methodologies.
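The √N scaling can be checked with a quick Monte-Carlo sketch of shot-noise-limited photon counting (the parameters below are illustrative, not those of the experiment):

```python
import numpy as np

rng = np.random.default_rng(2)

def optical_sum_snr(N, photons_per_mode=0.5, trials=5000):
    """Empirical SNR of an optical fan-in summing N shot-noise-limited modes.

    The total photon count over N independent Poissonian modes is itself
    Poissonian with mean N * photons_per_mode, so it is sampled directly.
    """
    totals = rng.poisson(N * photons_per_mode, size=trials)
    return totals.mean() / totals.std()

for N in (100, 10_000, 1_000_000):
    print(N, round(optical_sum_snr(N), 1))
# Mean ~ N * nbar while shot noise ~ sqrt(N * nbar), so SNR ~ sqrt(N):
# large dot products stay accurate even below one photon per mode.
```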

In follow-up work,23 the authors propose a more advanced architecture that expands the previous linear framework by incorporating an element-wise Optoelectronic Opaque Nonlinearity Activation (OONA) layer between two fully connected optical linear layers, introducing nonlinearity into the model [Fig. 2(c)]. The linear layers implement M–V multiplications using broadband, incoherent light as direct input, with techniques similar to those of previous studies. Natural input images are distributed via microlens-based fan-out, and multiplication is achieved through spatial light modulation of attenuated image copies according to the weight-matrix components [see Fig. 2(d)]. The OONA layer processes the resulting vectors through the saturating nonlinear response of an image-intensifier tube, preserving the ONN’s spatial parallelism [Fig. 2(e)]. This layer improves computational efficiency by avoiding the read-out/read-in costs of processing nonlinear activations in separate electronics. The second ONN layer follows a similar optical M–V multiplier configuration, culminating in output detection by a camera or photodetector array and extending the architecture’s capacity to include nonlinearity. The authors showcase the capabilities of these networks on a variety of image understanding tasks, such as Modified National Institute of Standards and Technology (MNIST) digit classification, image reconstruction, and cell-organelle anomaly classification. Significantly, this methodology extends beyond image analysis, as demonstrated in a recent study by Valensise et al.,33 which employs a similar approach to implement optical encoding of natural-language sentences.
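A compact sketch of this two-layer architecture, with an assumed saturating function standing in for the intensifier’s measured response (all names and sizes here are illustrative), could look as follows:

```python
import numpy as np

rng = np.random.default_rng(3)

def optical_linear(x, W):
    """Incoherent optical M-V product: non-negative intensities and weights."""
    return W @ x

def intensifier(z, sat=1.0):
    """Assumed saturating curve standing in for the image-intensifier response."""
    return sat * (1.0 - np.exp(-z / sat))

x = rng.uniform(0, 1, size=64)           # input image intensities (flattened)
W1 = rng.uniform(0, 0.1, size=(16, 64))  # first-layer SLM attenuations
W2 = rng.uniform(0, 0.1, size=(10, 16))  # second-layer SLM attenuations

hidden = intensifier(optical_linear(x, W1))  # nonlinearity applied optically, in parallel
logits = optical_linear(hidden, W2)          # read out by a camera/photodetector array
print(logits.argmax())                       # e.g., a predicted digit class
```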

The potential of optical accelerators to enable new applications of artificial intelligence (AI) extends beyond computational acceleration, opening frontiers in feature extraction and analysis.32,34–37 Besides enhancing standard AI computations, these accelerators augment the dimensionality of feature spaces beyond the grasp of contemporary electronic architectures. This paradigm shift begins in machine vision, where recent work demonstrated optical accelerators integrated within imaging systems, extracting features at speeds exceeding ten Tb/s.32,35 These accelerators enable operations such as M–V products,34 spectral encoding,32,36 depth estimation,35 and polarization measurements37 by directly interfacing with optical signals.

A recent example of such a system performs real-time, high-resolution hyperspectral video acquisition and processing.32 The system leverages the universal approximation ability of inverse-designed metasurfaces composed of nanoresonator units38–40 as optimal spectral-encoder hardware for hyperspectral video and imaging tasks. It comprises two core components: a hardware spectral encoder, denoted E, and a complementary software decoder, D [Fig. 2(f)]. The encoder transforms high-dimensional hyperspectral data β into a lower-dimensional multispectral feature tensor Ŝ, while the subsequent decoder transforms Ŝ into customized outputs tailored to specific tasks. This approach implements unsupervised dimensionality reduction, finding the best encoded representation of β through a set of projectors Λ(ω).
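Assuming, for illustration, that each camera sub-pixel integrates the incoming spectrum against one fixed filter curve, the encoder reduces to a linear projection, and the pipeline can be sketched as below; the pseudo-inverse decoder merely stands in for the trained neural decoder of the actual system:

```python
import numpy as np

rng = np.random.default_rng(4)

n_bands, n_filters, n_pixels = 100, 9, 16  # illustrative sizes
beta = rng.uniform(0, 1, size=(n_bands, n_pixels))  # hyperspectral cube (pixels flattened)
Lam = rng.uniform(0, 1, size=(n_filters, n_bands))  # filter responses, the projectors Λ(ω)

# Hardware encoder E: each sub-pixel integrates the spectrum against one
# nanoresonator transmission curve, producing the multispectral tensor Ŝ.
S_hat = Lam @ beta                                  # shape (n_filters, n_pixels)

# Software decoder D, here a linear least-squares reconstruction back to
# n_bands; the real system trains a neural decoder for each task instead.
beta_rec = np.linalg.pinv(Lam) @ S_hat
rmse = np.sqrt(np.mean((beta - beta_rec) ** 2))
print(f"reconstruction RMSE: {rmse:.3f}")
```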

The output of the hardware encoder, an array of sub-pixels integrated into a single camera pixel, is captured by a monochromatic camera that acts as an imaging readout layer. The software decoder interprets the Ŝ tensor differently for different machine vision tasks, such as hyperspectral reconstruction and semantic segmentation. For hyperspectral reconstruction, the objective is to minimize the Root Mean Squared Error (RMSE) between ground-truth and reconstructed spectral responses. For semantic segmentation, the task is pixel-level classification of the image: the system automatically creates a segmentation mask that identifies the different classes as they move through the scene in real time. The AI-enhanced hyperspectral technology provides substantial reconstruction quality, achieving under 2% reconstruction error while maintaining high image quality at 30 frames per second and 1 megapixel spatial resolution. This approach enables high-quality hyperspectral imaging at elevated resolutions, opening the door to real-time applications at video rates.

The journey toward optical accelerators in high-performance computing opens new avenues for the future of artificial intelligence. Optical accelerators meet the escalating computational demands of modern AI architectures, harnessing the speed of photons to overcome electronic computing bottlenecks and promising enhanced computational efficiency, improved energy utilization, and intrinsic parallel data processing.

However, alongside this promise of transformation, challenges lie ahead that create essential opportunities for researchers. Integrating photonics into AI accelerators demands the synchronization of hardware and software components, an open problem that requires novel design paradigms. Ensuring seamless compatibility between photonics and electronics, while mitigating latency and preserving signal integrity, is equally challenging. In addition, the transition to optical accelerators must proceed without compromising the accessibility and affordability of current integrated electronic systems. Bridging the gap between cutting-edge results and practical deployment will require concerted efforts to overcome manufacturing complexities while fostering a broader ecosystem that can support the mass adoption of new optical technologies. Another critical challenge is equipping optoelectronic AI hardware with cost-effective, energy-efficient broadband nonlinear responses that can move beyond initial lab-level prototypes. This issue is key to unlocking the full potential of optical accelerators for advanced computational tasks, offering efficient routes toward fully optical neural networks.

Addressing these questions can enable future AI systems to transcend the boundaries of traditional image-based analysis and delve into richly informative video streams containing hundreds of information channels. Foundation models, known for their prowess in capturing long-range dependencies within data,11,41,42 are uniquely positioned to exploit the multi-dimensional nature of hyperspectral video, unlocking its latent insights. Future AI systems equipped with hyperspectral video understanding can unravel intricate relationships and patterns embedded within the data, offering enhanced potential for applications in fields as diverse as medical diagnostics,43 environmental monitoring,31,44 security,45,46 and beyond.

As the exponential growth in computational demands and the quest for energy-efficient computing converge, the future trajectory of optical accelerators for AI is poised to shape the foundation of high-performance computing and the successful development of a new, modern era of AI.

The authors have no conflicts to disclose.

Maksim Makarenko: Writing – original draft (equal); Writing – review & editing (equal). Qizhou Wang: Visualization (equal); Writing – review & editing (equal). Arturo Burguete-Lopez: Visualization (equal); Writing – review & editing (equal). Andrea Fratalocchi: Writing – review & editing (equal).

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

1. C. Wu, F. Wu, L. Lyu, Y. Huang, and X. Xie, Nat. Commun. 13, 2032 (2022).
2. C. Wu, F. Wu, L. Lyu, T. Qi, Y. Huang, and X. Xie, Nat. Commun. 13, 3091 (2022).
3. I. Dayan, H. R. Roth, A. Zhong, A. Harouni, A. Gentili, A. Z. Abidin, A. Liu, A. B. Costa, B. J. Wood, C.-S. Tsai, C.-H. Wang, C.-N. Hsu, C. K. Lee, P. Ruan, D. Xu, D. Wu, E. Huang, F. C. Kitamura, G. Lacey, G. C. de Antônio Corradi, G. Nino, H.-H. Shin, H. Obinata, H. Ren, J. C. Crane, J. Tetreault et al., Nat. Med. 27, 1735 (2021).
4. J. Lu, C. Clark, R. Zellers, R. Mottaghi, and A. Kembhavi, "Unified-IO: A unified model for vision, language, and multi-modal tasks," arXiv:2206.08916 [cs.CV] (2022).
5. D. Amodei and D. Hernandez, https://openai.com/research/ai-and-compute (2018); accessed 27 August 2023.
6. R. R. Schaller, IEEE Spectrum 34, 52 (1997).
7. M. Wornow, Y. Xu, R. Thapa, B. Patel, E. Steinberg, S. Fleming, M. A. Pfeffer, J. Fries, and N. H. Shah, npj Digital Med. 6, 135 (2023).
8. M. Moor, O. Banerjee, Z. S. H. Abad, H. M. Krumholz, J. Leskovec, E. J. Topol, and P. Rajpurkar, Nature 616, 259 (2023).
9. F. Ferrari, J. van Dijck, and A. van den Bosch, Nat. Mach. Intell. 5, 818 (2023).
10. J. Ma and B. Wang, Nat. Methods 20, 953 (2023).
11. OpenAI, "GPT-4 technical report," arXiv:2303.08774 [cs.CL] (2023).
12. S. Reed, K. Zolna, E. Parisotto, S. G. Colmenarejo, A. Novikov, G. Barth-Maron, M. Gimenez, Y. Sulsky, J. Kay, J. T. Springenberg, T. Eccles, J. Bruce, A. Razavi, A. Edwards, N. Heess, Y. Chen, R. Hadsell, O. Vinyals, M. Bordbar, and N. de Freitas, "A generalist agent," Trans. Mach. Learn. Res. (published online, 2022), https://openreview.net/forum?id=1ikK0kHjvj.
13. H. Zhou, J. Dong, J. Cheng, W. Dong, C. Huang, Y. Shen, Q. Zhang, M. Gu, C. Qian, H. Chen, Z. Ruan, and X. Zhang, Light: Sci. Appl. 11, 30 (2022).
14. T. Wang, S.-Y. Ma, L. G. Wright, T. Onodera, B. C. Richard, and P. L. McMahon, Nat. Commun. 13, 123 (2022).
15. B. J. Shastri, A. N. Tait, T. Ferreira de Lima, W. H. P. Pernice, H. Bhaskaran, C. D. Wright, and P. R. Prucnal, Nat. Photonics 15, 102 (2021).
16. R. Hamerly, L. Bernstein, A. Sludds, M. Soljačić, and D. Englund, Phys. Rev. X 9, 021032 (2019).
17. K. Roy, A. Jaiswal, and P. Panda, Nature 575, 607 (2019).
18. D. Zhang and Z. Tan, Appl. Sci. 12, 5338 (2022).
19. H. Zheng, Q. Liu, Y. Zhou, I. I. Kravchenko, Y. Huo, and J. Valentine, Sci. Adv. 8, eabo6410 (2022).
20. L. Bernstein, A. Sludds, C. Panuski, S. Trajtenberg-Mills, R. Hamerly, and D. Englund, Sci. Adv. 9, eadg7904 (2023).
21. C. Wu, H. Yu, S. Lee, R. Peng, I. Takeuchi, and M. Li, Nat. Commun. 12, 96 (2021).
22. F. Ashtiani, A. J. Geers, and F. Aflatouni, Nature 606, 501 (2022).
23. T. Wang, M. M. Sohoni, L. G. Wright, M. M. Stein, S.-Y. Ma, T. Onodera, M. G. Anderson, and P. L. McMahon, Nat. Photonics 17, 408 (2023).
24. X. Xu, M. Tan, B. Corcoran, J. Wu, A. Boes, T. G. Nguyen, S. T. Chu, B. E. Little, D. G. Hicks, R. Morandotti et al., Nature 589, 44 (2021).
25. J. Feldmann, N. Youngblood, M. Karpov, H. Gehring, X. Li, M. Stappers, M. Le Gallo, X. Fu, A. Lukashchuk, A. S. Raja et al., Nature 589, 52 (2021).
26. J. Touch, A.-H. Badawy, and V. J. Sorger, Nanophotonics 6, 503 (2017).
27. H. Zheng, Q. Liu, I. I. Kravchenko, X. Zhang, Y. Huo, and J. G. Valentine, "Intelligent multi-channel meta-imagers for accelerating machine vision," arXiv:2306.07365 [cs.CV] (2023).
28. H. Zhang, M. Gu, X. D. Jiang, J. Thompson, H. Cai, S. Paesani, R. Santagati, A. Laing, Y. Zhang, M. H. Yung, Y. Z. Shi, F. K. Muhammad, G. Q. Lo, X. S. Luo, B. Dong, D. L. Kwong, L. C. Kwek, and A. Q. Liu, Nat. Commun. 12, 457 (2021).
29. F. Zangeneh-Nejad, D. L. Sounas, A. Alù, and R. Fleury, Nat. Rev. Mater. 6, 207 (2021).
30. Z. Wang, G. Hu, X. Wang, X. Ding, K. Zhang, H. Li, S. N. Burokur, Q. Wu, J. Liu, J. Tan, and C.-W. Qiu, Nat. Commun. 13, 2188 (2022).
31. B. Park and R. Lu, Hyperspectral Imaging Technology in Food and Agriculture (Springer, 2015).
32. M. Makarenko, A. Burguete-Lopez, Q. Wang, F. Getman, S. Giancola, B. Ghanem, and A. Fratalocchi, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2022), pp. 12692–12702.
33. C. M. Valensise, I. Grecco, D. Pierangeli, and C. Conti, Photonics Res. 10, 2846 (2022).
34. Q. Guo, Z. Shi, Y.-W. Huang, E. Alexander, C.-W. Qiu, F. Capasso, and T. Zickler, Proc. Natl. Acad. Sci. U. S. A. 116, 22959 (2019).
35. H. Kwon, E. Arbabi, S. M. Kamali, M. Faraji-Dana, and A. Faraon, Nat. Photonics 14, 109 (2020).
36. J. Yang, K. Cui, X. Cai, J. Xiong, H. Zhu, S. Rao, S. Xu, Y. Huang, F. Liu, X. Feng, and W. Zhang, Laser Photonics Rev. 16, 2100663 (2022).
37. N. A. Rubin, G. D’Aversa, P. Chevalier, Z. Shi, W. T. Chen, and F. Capasso, Science 365, eaax1839 (2019).
38. F. Getman, M. Makarenko, A. Burguete-Lopez, and A. Fratalocchi, Light: Sci. Appl. 10, 47 (2021).
39. M. Makarenko, A. Burguete-Lopez, F. Getman, and A. Fratalocchi, Sci. Rep. 10, 9038 (2020).
40. M. Makarenko, Q. Wang, A. Burguete-Lopez, F. Getman, and A. Fratalocchi, Adv. Intell. Syst. 3, 2100105 (2021).
41. J. Hirschberg and C. D. Manning, Science 349, 261 (2015).
42. T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., "Language models are few-shot learners," in Advances in Neural Information Processing Systems (Curran Associates, Inc., 2020), pp. 1877–1901, https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.
43. G. Lu and B. Fei, J. Biomed. Opt. 19, 010901 (2014).
44. Airobot, Hyperspectral drone and software for agriculture (2021), https://airobot.eu/solutions/hyperspectral-drone-and-software-for-agriculture/.
45. X. Guo, M. A. Khalid, I. Domingos, A. L. Michala, M. Adriko, C. Rowel, D. Ajambo, A. Garrett, S. Kar, X. Yan, J. Reboud, E. M. Tukahebwa, and J. M. Cooper, Nat. Electron. 4, 615 (2021).
46. G. Martini, A. Bracci, L. Riches, S. Jaiswal, M. Corea, J. Rivers, A. Husain, and E. Omodei, Nat. Food 3, 716 (2022).