In this work, we introduce an open-ended question benchmark, ALDbench, to evaluate the performance of large language models (LLMs) in materials synthesis and, in particular, in the field of atomic layer deposition, a thin-film growth technique used in energy applications and microelectronics. Our benchmark comprises questions whose difficulty ranges from the graduate level to the domain-expert level, current with the state of the art in the field. Human experts reviewed the questions along the criteria of difficulty and specificity, and the model responses along four criteria: overall quality, specificity, relevance, and accuracy. We ran this benchmark on an instance of OpenAI's GPT-4o. The responses from the model received a composite quality score of 3.7 on a 1–5 scale, consistent with a passing grade. However, 36% of the questions received at least one below-average score. An in-depth analysis of the responses identified at least five instances of suspected hallucination. Finally, we observed statistically significant correlations between the difficulty of the question and the quality of the response, between the difficulty of the question and the relevance of the response, and between the specificity of the question and the accuracy of the response, as graded by the human experts. These results emphasize the need to evaluate LLMs across multiple criteria beyond difficulty or accuracy.
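
The abstract does not specify how the composite score or the correlations were computed, so the following is only a minimal illustrative sketch under stated assumptions: expert ratings are treated as 1–5 ordinal values, the composite quality score is taken as the mean quality rating, "below average" is assumed to mean a rating below 3, and rank correlations are computed with Spearman's rho. All variable names and the randomly generated ratings are hypothetical and are not the ALDbench data or analysis code.

```python
# Hypothetical sketch of the scoring analysis described above; the data here
# are randomly generated placeholders, not actual ALDbench ratings.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_questions = 70  # placeholder benchmark size (assumption)

# Illustrative 1-5 expert ratings: question attributes and response scores.
difficulty = rng.integers(1, 6, n_questions)
specificity = rng.integers(1, 6, n_questions)
quality = rng.integers(1, 6, n_questions)
relevance = rng.integers(1, 6, n_questions)
accuracy = rng.integers(1, 6, n_questions)

# Composite quality score: mean of the per-response quality ratings (assumed).
print(f"Composite quality score: {quality.mean():.1f} / 5")

# Fraction of questions with at least one below-average (<3, assumed) score
# across the response criteria.
scores = np.stack([quality, relevance, accuracy])
below_avg = (scores < 3).any(axis=0).mean()
print(f"Questions with at least one below-average score: {below_avg:.0%}")

# Spearman rank correlations between question attributes and response scores.
for x_name, x in [("difficulty", difficulty), ("specificity", specificity)]:
    for y_name, y in [("quality", quality), ("relevance", relevance),
                      ("accuracy", accuracy)]:
        rho, p = spearmanr(x, y)
        print(f"{x_name} vs {y_name}: rho={rho:+.2f}, p={p:.3f}")
```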
